Also slightly extended the check test 'framework' to allow registering side-dependencies and using them from another module. This makes it possible to check the interplay between opaque types from within and outside of their host module.
The main trick here was transforming `Assignment`
to contain `Vec<(UntypedPattern, Option<Annotation>)>`
in a field called `patterns`. This meant that I
could remove the `pattern` and `annotation` fields
from `Assignment`. The parser handles `=` and `<-`
just fine because, in the future, `=` with multiple
patterns will mean some kind of optimization on tuples.
But since we don't have that optimization yet, when
someone uses multiple patterns with `=`, an error is
returned from the type checker right where `infer_seq`
looks for `backpassing`. From there, the rest of the work
was in `Project::backpassing`, where I only needed to rework
some things to operate on a list of patterns instead of just one.
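As a rough sketch (using stand-ins for the real AST types, so only the shape of the change is accurate):
```
// Stand-ins for the real AST types, for illustration only.
pub struct UntypedPattern;
pub struct Annotation;

// Before: exactly one pattern and one optional annotation.
//
//   pub struct Assignment {
//       pub pattern: UntypedPattern,
//       pub annotation: Option<Annotation>,
//       ...
//   }

// After: a single `patterns` field carrying one or more
// pattern/annotation pairs, enabling multi-pattern assignments.
pub struct Assignment {
    pub patterns: Vec<(UntypedPattern, Option<Annotation>)>,
    // ...other fields (kind, location, etc.) unchanged
}
```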
This approach is more holistic and less awkward than having monadic bind work only with some pre-defined types. Backpassing works with _any_ function and can be implemented relatively easily by rewriting the AST on the fly.
Also, it is far easier to explain than what a monadic bind is, how its behavior differs from type to type, and why it isn't generally available for any monadic type.
This allows for more fine-grained control over how traces are shown. Now users can instrument the compiler to preserve only their user-defined traces, only the compiler-generated ones, all of them, or none. We also want to add another trace level on top of that: 'compact', which only shows line numbers and will work for both user-defined and compiler-generated traces.
This improves error messages for `a |> b(x)`.
We need to do a special check when looping over the args
and unifying. This information lives in a function that does not belong
to the pipe typer, so I used a closure to forward along a way to add
metadata to the error when the first argument in the loop has a
unification error. Simply adding the metadata at the pipe-typer
level is not good enough, because then we may annotate regular
unification errors from the args.
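Here is a minimal sketch of the idea, with hypothetical names and types (the real signatures are different):
```
#[derive(Debug)]
struct TypeError {
    message: String,
    note: Option<String>,
}

// Stand-in for the unification of a single argument.
fn unify(_arg: &str) -> Result<(), TypeError> {
    Ok(())
}

// The loop lives outside the pipe typer, so the pipe typer hands it a
// closure that decorates only the *first* argument's error (the piped
// value), leaving unification errors on the remaining args untouched.
fn unify_args(
    args: &[&str],
    annotate_first: impl Fn(TypeError) -> TypeError,
) -> Result<(), TypeError> {
    for (index, arg) in args.iter().enumerate() {
        if let Err(err) = unify(arg) {
            return Err(if index == 0 { annotate_first(err) } else { err });
        }
    }
    Ok(())
}
```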
When rendering missing or redundant patterns, linked lists would
wrongly suggest the last Nil constructor as a pattern on non-empty
lists.
For example, before this commit, the exhaustiveness checker would yield:
```
[(_, True), []]
```
as a suggestion; it comes from rendering the list pattern
`(_, True) :: Nil`, a list with a single element. Blindly following the
compiler suggestion here would cause a type unification error (since
`[]` doesn't unify with a 2-tuple).
Indeed, we mustn't render the Nil constructor when rendering non-empty
lists! So the correct suggestion should be:
```
[(_, True)]
```
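The fix, as a hypothetical sketch over a simplified pattern type:
```
enum Pattern {
    Wildcard,
    Nil,
    Cons(Box<Pattern>, Box<Pattern>),
}

fn render(pattern: &Pattern) -> String {
    match pattern {
        Pattern::Wildcard => "_".to_string(),
        Pattern::Nil => "[]".to_string(),
        Pattern::Cons(..) => {
            let mut elems = Vec::new();
            collect_elems(pattern, &mut elems);
            format!("[{}]", elems.join(", "))
        }
    }
}

// Walk the Cons spine collecting heads; crucially, a trailing Nil is
// *not* rendered as an element, so we get `[(_, True)]` rather than
// `[(_, True), []]`.
fn collect_elems(pattern: &Pattern, elems: &mut Vec<String>) {
    match pattern {
        Pattern::Cons(head, tail) => {
            elems.push(render(head));
            collect_elems(tail, elems);
        }
        Pattern::Nil => {}
        other => elems.push(render(other)),
    }
}
```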
@MartinSchere noticed a weird error
where an unknown variable wasn't being reported:
the type checker was incorrectly scoping
arguments for anonymous function definitions.
Luckily, his compilation failed due to a FreeUnique
error during code gen, which is good. But this may
have been the source of other mysterious FreeUnique
errors.
I also noticed that anonymous functions allowed
arguments with the same name to be defined, e.g.
`fn(arg, arg)`.
This now returns an error.
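The check itself boils down to something like this (a sketch, not the actual compiler code):
```
use std::collections::HashSet;

// Reject anonymous functions that bind the same argument name twice,
// e.g. `fn(arg, arg)`.
fn check_duplicate_args(names: &[&str]) -> Result<(), String> {
    let mut seen = HashSet::new();
    for name in names {
        if !seen.insert(*name) {
            return Err(format!("duplicate argument name: {name}"));
        }
    }
    Ok(())
}
```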
Unused params were being incorrectly reported.
This was because params need to be initialized
in a scope above both of the validator functions. This
manifested when using a multi-validator where one of
the params was not used by both validators.
The easy fix was to add a field called
`is_validator_param` to `ArgName`. Then,
when inferring a function, we don't initialize args
that are validator params. Instead, we handle them
in a scope created beforehand in the match branch for
validators in the `infer_definition` function. In there,
we call `.in_new_scope` and initialize params for usage
detection.
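Conceptually, the scoping looks like this (a sketch with invented helper names):
```
// Track (name, used) pairs per scope. Params initialized in an outer
// scope stay visible to both validator functions, so a param only
// counts as unused if *neither* validator touches it.
struct Env {
    scopes: Vec<Vec<(String, bool)>>,
}

impl Env {
    fn in_new_scope<T>(&mut self, body: impl FnOnce(&mut Env) -> T) -> T {
        self.scopes.push(Vec::new());
        let result = body(self);
        self.scopes.pop();
        result
    }

    fn init_param_usage(&mut self, name: &str) {
        if let Some(scope) = self.scopes.last_mut() {
            scope.push((name.to_string(), false));
        }
    }
}

// In `infer_definition` (pseudo-usage):
//
//   env.in_new_scope(|env| {
//       for param in &params { env.init_param_usage(param); }
//       // ...infer both validator functions in here...
//   });
```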
Isolated doc comments cause the compiler to panic with:
```
'no consecutive empty lines'
```
This is reproducible when a doc comment is sandwiched between
comments and newlines.
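For example, a layout like this one (reconstructed from the description) triggers the panic:
```
// a regular comment

/// an isolated doc comment

// another regular comment
```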
The typed-AST produced as a result of type-checking the program will
no longer contain unused let-bindings. They still raise warnings in
the code so that developers are aware that they are being ignored.
This is mainly done to prevent mistakes for people coming from an
imperative background who may think that things like:
```
let _ = foo(...)
```
should have some side-effects. They do not, and are similar to
assigned variables that are never used / evaluated. We now properly
strip those elements from the AST when encountered and raise proper
warnings, even for discarded values.
This will probably save people minutes/hours of puzzled debugging. This is only a warning, because there may be cases where one does actually want to specify a hex-encoded bytearray; in that case, they can get rid of the warning by using the plain bytearray syntax (i.e. as an array of bytes).
The core observation is that **in the context of Aiken** (i.e. on-chain logic)
people do not generally want to use String. Instead, they want
bytearrays.
So, it should be easy to produce bytearrays when needed and it should
be the default. Before this commit, `"foo"` would parse as a `String`.
Now, it parses as a `ByteArray`, whose bytes are the UTF-8 bytes
encoding of "foo".
Now, to make this change really "fool-proof", we want to:
- [ ] Emit a parse error if we parse a UTF-8 bytearray literal in a
place where we would expect a `String`. For example, `trace`,
`error` and `todo` can only be followed by a `String`.
So when we see something like:
```
trace "foo"
```
we know it's a mistake and we can suggest users to use:
```
trace @"foo"
```
instead.
- [ ] Emit a warning if we ever see a UTF-8 bytearray literal which
is either 56 or 64 characters long and is a valid hexadecimal string
(see the sketch after this list).
For example:
```
let policy_id = "29d222ce763455e3d7a09a665ce554f00ac89d2e99a1a83d267170c6"
```
This is _most certainly_ a mistake, as this generates a 56-byte
ByteArray holding the UTF-8 bytes of the string itself, when the
string was most likely meant to be hex-decoded into 28 bytes.
In this scenario, we want to warn the user and inform them they probably meant to use:
```
let policy_id = #"29d222ce763455e3d7a09a665ce554f00ac89d2e99a1a83d267170c6"
```
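The detection itself can be as simple as the following sketch (hypothetical function name):
```
// A UTF-8 bytearray literal that is exactly 56 or 64 characters long
// (the lengths of hex-encoded 28- and 32-byte hashes) and contains
// only hex digits is most likely a missing `#"..."` literal.
fn looks_like_hex_string(literal: &str) -> bool {
    matches!(literal.len(), 56 | 64)
        && literal.chars().all(|c| c.is_ascii_hexdigit())
}
```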
This caused me some trouble. In my first approach, I ended up having
multiple traces because nested values would be evaluated twice: once
as a condition, and once as part of the continuation.
To prevent this, we can simply evaluate the condition once and return
a plain True / False boolean as the outcome. So this effectively
transforms any expression:
```
expr
```
into:
```
if expr { True } else { trace("...", False) }
```
Interestingly enough, chumsky seems to fail when given a 'choice' with
more than 25 elements. That's why this commit groups some of
the choices together into another nested 'choice'.
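A sketch of the workaround, assuming a chumsky 0.x-style API (the parser names are made up):
```
use chumsky::prelude::*;

// `choice` is only implemented for tuples up to a fixed arity, so once
// the flat list grows too large, overflow alternatives get grouped
// into a nested `choice`.
fn keyword() -> impl Parser<char, &'static str, Error = Simple<char>> {
    choice((
        just("let").to("let"),
        just("fn").to("fn"),
        // ...more alternatives...
        choice((
            just("trace").to("trace"),
            just("todo").to("todo"),
        )),
    ))
}
```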
This, however, enforces that the argument unifies to a `String`. So this
is more flexible than the previous form, but does fundamentally the
same thing.
Fixes #378.