- Add support to the formatter for these doc comments
- Add a new field `doc: Option<String>` to `Arg` (see the sketch below)
- Don't attach docs immediately after typechecking a module
- Instead we should do it on demand in docs, build, and lsp
- The check command doesn't need to have any docs attached
- Doing it more lazily defers the computation until later, making
  typechecking feedback a bit faster
- Add support for function arg and validator param docs in
`attach_module_docs` methods
- Update some snapshots
- Add `put_doc` to `Arg`
Closes #685
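For illustration, here is a minimal, self-contained sketch of the shape of that change. The struct is deliberately simplified and is not the actual Aiken `Arg` type; only the `doc` field and `put_doc` method mirror the bullets above.

```rust
// Simplified sketch, not the real Aiken AST: an argument node that can carry
// an optional doc comment, attached lazily via `put_doc`.
#[derive(Debug, Clone)]
pub struct Arg {
    pub name: String,
    pub doc: Option<String>,
}

impl Arg {
    /// Attach a doc comment on demand (docs, build, lsp) rather than eagerly
    /// right after type-checking.
    pub fn put_doc(&mut self, doc: String) {
        self.doc = Some(doc);
    }
}
```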
This improves error messages for `a |> b(x)`.
We need to do a special check when looping over the args
and unifying. This information lives within a function that does not belong
to the pipe typer, so I used a closure to forward along a way to add
metadata to the error when the first argument in the loop has a
unification error. Simply adding the metadata at the pipe typer
level is not good enough, because then we may annotate regular
unification errors from the args.
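A rough, self-contained sketch of that closure trick, with hypothetical names rather than the actual pipe-typer code: the caller hands in a closure that decorates a unification error with pipe-specific metadata, and the argument loop only applies it to the first (piped-in) argument.

```rust
// Hypothetical sketch: only the first argument's unification error gets the
// pipe-specific metadata; errors on other arguments are passed through as-is.
type UnifyError = String;

fn unify_args(
    args: &[i32],
    annotate_pipe_error: impl Fn(UnifyError) -> UnifyError,
) -> Result<(), UnifyError> {
    for (index, arg) in args.iter().enumerate() {
        if let Err(error) = unify_arg(arg) {
            // In `a |> b(x)`, index 0 is the piped-in value `a`.
            return Err(if index == 0 {
                annotate_pipe_error(error)
            } else {
                error
            });
        }
    }
    Ok(())
}

// Stand-in for unifying a single call argument against its expected type.
fn unify_arg(_arg: &i32) -> Result<(), UnifyError> {
    Ok(())
}
```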
We never actually parse negative values there, as a negative value is a combination of a 'Negate' and a 'UInt' expression.
However, for patterns and constants, it'll be simpler to parse whole Int values as there's no ambiguity with arithmetic operations
there. To avoid the confusion of having some 'Int' constructors containing only non-negative values and some covering the whole range,
I've renamed the constructor to 'UInt' to make this more obvious.
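An illustrative sketch of the resulting shape, with simplified stand-in types rather than the real Aiken expression AST:

```rust
// Expression literals are unsigned; a negative literal is a `Negate` wrapping
// a `UInt`. Patterns and constants, by contrast, can parse whole Int values.
enum UntypedExpr {
    UInt(u64),
    Negate(Box<UntypedExpr>),
}

fn minus_forty_two() -> UntypedExpr {
    // `-42` in expression position parses as Negate(UInt(42)).
    UntypedExpr::Negate(Box::new(UntypedExpr::UInt(42)))
}
```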
@MartinSchere noticed a weird error
where an unknown variable wasn't being reported:
the type checker was incorrectly scoping
arguments for anonymous function definitions.
Luckily, his compilation failed due to a FreeUnique
error during code gen, which is good. But this may
have been the source of other mysterious FreeUnique
errors.
I also noticed that anonymous functions allowed
arguments with the same name to be defined, e.g.
`fn(arg, arg)`.
This now returns an error.
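A minimal sketch of the duplicate-argument check, using a hypothetical helper rather than the actual type-checker code:

```rust
use std::collections::HashSet;

// Reject anonymous functions such as `fn(arg, arg)` by tracking the argument
// names already seen while walking the argument list.
fn check_duplicate_args(names: &[&str]) -> Result<(), String> {
    let mut seen = HashSet::new();
    for &name in names {
        if !seen.insert(name) {
            return Err(format!("duplicate argument name: {name}"));
        }
    }
    Ok(())
}
```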
Unused params were being incorrectly reported.
This was because params need to be initialized
at a scope above both of the validator functions. This
manifested when using a multi-validator where one of
the params was not used in both validators.
The easy fix was to add a field called
`is_validator_param` to `ArgName`. Then,
when inferring a function, we don't initialize args
that are validator params. We now handle this
in a scope created beforehand, in the match branch for
validators in the `infer_definition` function. In there
we call `.in_new_scope` and initialize params for usage
detection.
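A simplified sketch of the idea; the types below are stand-ins, not the real Aiken AST or environment. Validator params carry a flag so regular function inference skips initialising them, since they were already initialised once in the outer scope created by the validator branch of `infer_definition`.

```rust
// Stand-in types for illustration only.
struct ArgName {
    name: String,
    is_validator_param: bool,
}

struct Scope;

impl Scope {
    // Register a name for unused-variable detection.
    fn init_usage(&mut self, _name: &str) {}
}

fn init_function_args(args: &[ArgName], scope: &mut Scope) {
    for arg in args {
        // Validator params were initialised in the enclosing scope shared by
        // both validator functions; initialising them again here would lead
        // to bogus "unused parameter" reports.
        if !arg.is_validator_param {
            scope.init_usage(&arg.name);
        }
    }
}
```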
And disable multi-pattern clauses. I was originally just controlling
whether we disabled that from the parser, but then I figured we
could actually support multi-pattern clauses quite easily by simply
desugaring a multi-pattern into multiple clauses.
This is only syntactic sugar, which means that the cost of writing
that on-chain is as expensive as writing the fully expanded form; yet
it seems like a useful shorthand, especially for short clause
expressions.
This commit however disables multi-pattern when clauses, which we do
not support in the code generation. Instead, one can use a single
pattern on a tuple for that.
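A sketch of the desugaring idea, with hypothetical types: a clause listing several patterns expands into one clause per pattern, each sharing the same body, which is why it is purely syntactic sugar.

```rust
#[derive(Clone)]
struct Clause {
    patterns: Vec<String>, // the alternative patterns listed on this clause
    body: String,
}

// Expand a multi-pattern clause into several single-pattern clauses that all
// share the same body.
fn desugar(clause: &Clause) -> Vec<Clause> {
    clause
        .patterns
        .iter()
        .map(|pattern| Clause {
            patterns: vec![pattern.clone()],
            body: clause.body.clone(),
        })
        .collect()
}
```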
The typed-AST produced as a result of type-checking the program will
no longer contain unused let-bindings. They still raise warnings in
the code so that developers are aware that they are being ignored.
This is mainly done to prevent mistakes for people coming from an
imperative background who may think that things like:
```
let _ = foo(...)
```
should have some side-effects. It does not, and it's similar to
assigned variables that are never used / evaluated. We now properly
strip those elements from the AST when encountered and raise proper
warnings, even for discarded values.
It's generally a bad idea to use equality on enum variants, because it won't trigger any compiler errors in the future when new variants are added, yet could have hazardous effects. So it's usually preferable to use exhaustive pattern matching and let the compiler point out missing cases in the places where it matters.
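A small example of the difference: the `==` version keeps compiling unchanged when a new variant is added, while the exhaustive `match` forces every such site to be revisited.

```rust
#[derive(PartialEq)]
enum Mode {
    Check,
    Build,
    // If a `Docs` variant is added later, `is_check_eq` compiles unchanged,
    // but `is_check_match` becomes a compile error until the case is handled.
}

fn is_check_eq(mode: &Mode) -> bool {
    *mode == Mode::Check
}

fn is_check_match(mode: &Mode) -> bool {
    match mode {
        Mode::Check => true,
        Mode::Build => false,
    }
}
```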
It is now possible to leave a hole in a type annotation and have the compiler fill in the expected type for us.
This is a pretty useful debugging tool when playing with complex functions.
Rules are now as follows:
- If a pipeline contains a newline, then the entire pipeline is formatted over multiple lines.
- If it doesn't, then it's formatted on a single line UNLESS it cannot fit, in which case we fall back to multiline again (see the sketch below).
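A sketch of that decision, purely illustrative and not the actual formatter code:

```rust
enum Layout {
    SingleLine,
    Multiline,
}

// Decide how to lay out a pipeline from its original source text and the
// maximum line width.
fn pipeline_layout(pipeline_source: &str, max_width: usize) -> Layout {
    if pipeline_source.contains('\n') {
        // The pipeline already spans several lines: keep it multiline.
        Layout::Multiline
    } else if pipeline_source.len() <= max_width {
        Layout::SingleLine
    } else {
        // Too wide for a single line: fall back to multiline.
        Layout::Multiline
    }
}
```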
- Builtins IR now acts like Record IR in terms of argument consumption.
- UnConstrData returns a Pair(Data, Data) to conform with how pairs are treated behind the scenes.
This will probably save people minutes/hours of puzzled debugging. This is only a warning because there may be cases where one does actually want to specify a hex-encoded bytearray. In that case, they can get rid of the warning by using the plain bytearray syntax (i.e. as an array of bytes).
This is not supported by the code generation, so it's a bit of a lie
to have them in the language in the first place. There's arguably not
even any use for constant records, lists, and tuples to begin with. So
this cleans things up everywhere for the sake of moving forward with the
alpha release.
This now reduces constants to:
- Integer
- ByteArray
- String
Anything else can be declared via a function anyway. We can revisit
this choice later.... or not.
This caused me some trouble. In my first approach, I ended up having
multiple traces because nested values would be evaluated twice: once
as the condition, and once as part of the continuation.
To prevent this, we can simply evaluate the condition once and return
a plain True/False boolean as the outcome. So this effectively transforms any
expression:
```
expr
```
into
```
if expr { True } else { trace("...", False) }
```
The goal is to handle this without bothering the code generation down the line. That is, we can handle it when transforming from the untyped AST to the typed one. That's why there's no 'TraceIfFalse' constructor in the typed AST. It has disappeared during type-check.
Todo is fundamentally just a trace and an error. The only reason we kept it as a separate element in the AST is for the formatter to work out whether it should format something back to a todo or to something else.
However, this introduces redundancy in the code internally and makes the AIR more complicated than it needs to be. Both todo and errors can actually be represented as a trace plus an error, and we only need to record their preferred shape when parsing so that we can format them back to what's expected (a rough sketch of this representation follows below).
We now parse errors as a combination of a trace plus an error term. This is a baby step towards simplifying the code generation down the line and the internal representation of todo / errors.
This however enforces that the argument unifies to a `String`. So this
is more flexible than the previous form, but does fundamentally the
same thing.
Fixes #378.
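A simplified sketch of the todo/error representation described above, with stand-in types rather than the actual Aiken AST: both become a trace followed by an error term, and only a small tag records which surface form the formatter should print back.

```rust
// Which surface syntax this trace-plus-error came from, so the formatter can
// print it back as `todo` or as an error.
enum TraceKind {
    Todo,
    Error,
}

enum TypedExpr {
    Trace {
        kind: TraceKind,
        text: String,
        then: Box<TypedExpr>,
    },
    ErrorTerm,
}

fn todo_with_message(text: &str) -> TypedExpr {
    // `todo "not implemented"` is represented as a trace whose continuation
    // is an error term.
    TypedExpr::Trace {
        kind: TraceKind::Todo,
        text: text.to_string(),
        then: Box::new(TypedExpr::ErrorTerm),
    }
}
```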
I decided to invert how I'm doing it. I'm passing
a new argument called `allow_cast: bool` to `unify` in
the environment, and essentially at various
unification sites I can control whether or not I
want to allow casting to even occur. So we can
assume it's always false by default, and then we
turn it on in a few places, vs. just opening the
flood gates and locking it down at various sites
as they come up. (A rough sketch of this follows the list below.)
* you cannot cast FROM Data with a `let`
* you cannot cast FROM Data by passing
Data to non-Data when calling a function
* you MUST use `assert` to cast from data
* you can cast INTO Data with a `let`
* you can cast INTO Data by passing non-Data
to Data when calling a function
* You cannot assert cast Data without an
annotation
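A rough sketch of the `allow_cast` idea, with simplified stand-in types rather than the real `unify` in the environment: the flag defaults to false everywhere, and only the unification sites that implement the rules above pass true.

```rust
#[derive(PartialEq)]
enum Type {
    Data,
    Int,
    // ...other types elided for the sketch
}

// Simplified unification: identical types always unify; a mismatch involving
// `Data` only unifies when the call site explicitly opted into casting.
fn unify(expected: &Type, given: &Type, allow_cast: bool) -> Result<(), String> {
    if expected == given {
        return Ok(());
    }
    if allow_cast && (*expected == Type::Data || *given == Type::Data) {
        return Ok(());
    }
    Err("could not unify the expected and given types".to_string())
}
```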