Before this commit, we would parse 'Pair' as a user-defined
data-type, thus piggybacking on the whole record system. While
perhaps handy for some things, it's also semantically wrong and
induces a lot more complexity in codegen, which then needs to
systematically distinguish, for every data-type access, between
pairs and other records.
So it's better to have it as a separate expression, and handle it
similarly to tuples (since it's fundamentally a 2-tuple with a special
serialization).
A few more tests were added along the way. Note that it is important here that we try to parse a 'Pair' BEFORE we try to parse a constructor pattern, because the latter would otherwise swallow any Pair pattern.
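To make the ordering issue concrete, here is a minimal hand-rolled sketch of the idea, not the actual combinator-based parser (names and shapes are made up): 'Pair(a, b)' is syntactically indistinguishable from a two-argument constructor pattern, so the constructor branch would swallow it if tried first.

    // Hypothetical, hand-rolled sketch; the real parser uses combinators.
    #[allow(dead_code)]
    #[derive(Debug)]
    enum Pattern {
        Pair(Box<Pattern>, Box<Pattern>),
        Constructor { name: String, args: Vec<Pattern> },
        Var(String),
    }

    fn parse_pattern(input: &str) -> Pattern {
        let input = input.trim();
        // Try the dedicated Pair branch BEFORE the generic constructor branch:
        // "Pair(a, b)" would otherwise be consumed as a constructor pattern.
        if let Some(rest) = input.strip_prefix("Pair(") {
            let inner = rest.strip_suffix(')').unwrap_or(rest);
            let (fst, snd) = inner.split_once(',').expect("a pair has two elements");
            return Pattern::Pair(Box::new(parse_pattern(fst)), Box::new(parse_pattern(snd)));
        }
        if input.chars().next().is_some_and(|c| c.is_ascii_uppercase()) {
            // Generic constructor branch (arguments elided for brevity): this
            // would also happily match "Pair(a, b)" if it were tried first.
            let name = input.split('(').next().unwrap_or(input).to_string();
            return Pattern::Constructor { name, args: vec![] };
        }
        Pattern::Var(input.to_string())
    }

    fn main() {
        // Prints Pair(Var("a"), Var("b")) rather than a Constructor pattern.
        println!("{:?}", parse_pattern("Pair(a, b)"));
    }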
Currently, pattern-matching on 'Pair' is handled by treating Pair as a
record, which is slightly odd given that it isn't actually a record and
isn't user-defined. As a result, every use of a record must distinguish
between Pairs and other kinds of records, which screams for a dedicated
variant constructor instead.
We cannot use `Tuple` for this either, because then we have no way to
tell 2-tuples apart from pairs, which is the whole point here. So the
most sensible thing to do is to define a new pattern `Pair`, akin to
tuples but simpler, since the number of elements is known and is
always 2.
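Roughly, the new pattern is just a dedicated variant sitting next to Tuple, always carrying exactly two elements. A hedged sketch of what such an AST could look like (illustrative only, not the actual aiken AST):

    // Illustrative sketch only; the real aiken AST differs in its details.
    #[allow(dead_code)]
    #[derive(Debug)]
    enum Pattern {
        // General n-ary tuple: the number of elements is only known at parse time.
        Tuple { elems: Vec<Pattern> },
        // Dedicated Pair: akin to a tuple, but simpler since it always has
        // exactly two elements (and gets its own serialization downstream).
        Pair { fst: Box<Pattern>, snd: Box<Pattern> },
        // User-defined records/constructors are left untouched.
        Constructor { name: String, args: Vec<Pattern> },
        Var(String),
    }

    fn main() {
        let p = Pattern::Pair {
            fst: Box::new(Pattern::Var("k".into())),
            snd: Box::new(Pattern::Var("v".into())),
        };
        println!("{p:?}");
    }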
We have been a bit too strict in disallowing 'allow_cast' propagation. It is really only problematic for nested elements such as a Tuple's elements or an App's args. For linked and unbound vars, however, it is probably okay, and it certainly is for function arguments.
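As a rough sketch of the relaxed rule described above (toy shapes, not the actual type-checker):

    // Toy sketch of the relaxed rule; not the actual type-checker code.
    #[allow(dead_code)]
    enum Ann {
        Var,   // linked or unbound type variable
        Fn,    // function type (arguments)
        Tuple, // nested tuple elements
        App,   // type application arguments
    }

    // Whether 'allow_cast' may keep propagating when recursing into the
    // children of the given annotation.
    fn propagate_allow_cast(parent: &Ann, allow_cast: bool) -> bool {
        match parent {
            Ann::Var | Ann::Fn => allow_cast,  // okay to keep propagating
            Ann::Tuple | Ann::App => false,    // problematic: stop propagating
        }
    }

    fn main() {
        assert!(propagate_allow_cast(&Ann::Fn, true));
        assert!(!propagate_allow_cast(&Ann::Tuple, true));
    }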
It seems we can fix this by changing which side gets subtracted by 1
depending on the operator's associativity: BinOp::Or and BinOp::And are
right-associative, while the other binary operators are left-associative.
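The idea, sketched on a toy pretty-printer rather than the actual code: a child needs parentheses when its operator binds no tighter than its context, and which side gets the '- 1' depends on the operator's associativity.

    #[derive(Clone, Copy)]
    enum Assoc { Left, Right }

    #[derive(Clone, Copy)]
    struct Op { symbol: &'static str, prec: u8, assoc: Assoc }

    enum Expr { Lit(i64), Bin(Op, Box<Expr>, Box<Expr>) }

    // Render `e`, wrapping it in parentheses when its operator binds no
    // tighter than `threshold`.
    fn show(e: &Expr, threshold: u8) -> String {
        match e {
            Expr::Lit(n) => n.to_string(),
            Expr::Bin(op, l, r) => {
                // The side the operator associates towards may chain the same
                // operator without parens, so that side is the one that gets
                // `prec - 1`; the other side keeps `prec`.
                let (lt, rt) = match op.assoc {
                    Assoc::Left => (op.prec - 1, op.prec),  // e.g. `-`, `/`
                    Assoc::Right => (op.prec, op.prec - 1), // e.g. `&&`, `||`
                };
                let s = format!("{} {} {}", show(l, lt), op.symbol, show(r, rt));
                if op.prec <= threshold { format!("({s})") } else { s }
            }
        }
    }

    fn main() {
        let minus = Op { symbol: "-", prec: 6, assoc: Assoc::Left };
        let left = Expr::Bin(
            minus,
            Box::new(Expr::Bin(minus, Box::new(Expr::Lit(1)), Box::new(Expr::Lit(2)))),
            Box::new(Expr::Lit(3)),
        );
        let right = Expr::Bin(
            minus,
            Box::new(Expr::Lit(1)),
            Box::new(Expr::Bin(minus, Box::new(Expr::Lit(2)), Box::new(Expr::Lit(3)))),
        );
        assert_eq!(show(&left, 0), "1 - 2 - 3");    // no redundant parens
        assert_eq!(show(&right, 0), "1 - (2 - 3)"); // necessary parens preserved
    }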
closes #893
Co-authored-by: Kasey White <kwhitemsg@gmail.com>
This makes the search for counterexamples 30-40% slower in some cases, with the hope of finding better counterexamples. We might want to add a '--simplification-level' flag to the command-line to let users decide on the level of simplification.
And move some logic out of project/lib to sit near the CheckedModule
instead. The project API is already quite heavy and long, so making it
more lightweight is generally the direction we want to tend toward.
This change ensures that we only compile modules from dependencies
that are used (or transitively used) in the project. This allows
discarding entire compilation steps at the module level for modules
that we do not use.
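At its core, the pruning is a reachability walk over the module dependency graph, seeded with the project's own modules. A rough sketch of the idea (module names and shapes are made up, not the actual pipeline):

    use std::collections::{HashMap, HashSet, VecDeque};

    // Keep only the modules reachable from the project's own modules, so that
    // unused dependency modules never enter the compilation steps.
    fn prune<'a>(
        project_modules: &[&'a str],
        imports: &HashMap<&'a str, Vec<&'a str>>, // module -> modules it imports
    ) -> HashSet<String> {
        let mut keep = HashSet::new();
        let mut queue: VecDeque<&str> = project_modules.iter().copied().collect();
        while let Some(module) = queue.pop_front() {
            if keep.insert(module.to_string()) {
                for &dep in imports.get(module).into_iter().flatten() {
                    queue.push_back(dep);
                }
            }
        }
        keep
    }

    fn main() {
        let imports = HashMap::from([
            ("my_project/validator", vec!["aiken/list"]),
            ("aiken/list", vec![]),
            ("aiken/unused_helpers", vec![]), // never imported: pruned away
        ]);
        let kept = prune(&["my_project/validator"], &imports);
        assert!(kept.contains("aiken/list"));
        assert!(!kept.contains("aiken/unused_helpers"));
    }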
The main goal of this change isn't performance. It's about making
dependency management slightly easier while we decide whether and how
we want to manage transitive dependencies in Aiken.
A concrete case here is aiken-lang/stdlib, which will soon depend on
aiken-lang/fuzz. However, we do not want to require every single
project depending on stdlib to also require fuzz. So instead, we want
to segregate the fuzz API from stdlib into separate modules, and only
compile those if they appear in the pruned dependency graph.
While the goal isn't performance, here are some benchmarks analyzing
the impact of dependency pruning on a simple project that depends on a
few modules from stdlib:
Benchmark 1: ./aiken-without-deps-pruning check scratchpad
  Time (mean ± σ):     190.3 ms ± 101.1 ms    [User: 584.5 ms, System: 14.2 ms]
  Range (min … max):   153.0 ms … 477.7 ms    10 runs

Benchmark 2: ./aiken-with-deps-pruning check scratchpad
  Time (mean ± σ):     162.3 ms ±  46.3 ms    [User: 572.6 ms, System: 14.0 ms]
  Range (min … max):   142.8 ms … 293.7 ms    10 runs
As we can see, this change seems to have an overall positive impact on
the compilation time.
Also slightly extended the check test 'framework' to allow registering side-dependencies and using them from another module. This allows checking the interplay between opaque types from within and outside of their host module.
'is_assignment' was a bit confusing to me, since we do actually categorize 'expect' as an assignment. The flag is really about whether this is a *let* assignment, hence 'is_let'.
Discard patterns are _dangerous_ if used recklessly. The problem comes
from maintenance, when adding new fields: we usually don't get any
compiler warnings, which may lead to missed spots and confusing
behaviors.
So I have, in some cases, inlined discards to explicitly list all
fields. That's a bit more cumbersome to write, but it will hopefully
catch a few things for us in the future.
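A small, self-contained Rust illustration of the trade-off (the types are made up, unrelated to the code base): with '..' a newly added field goes unnoticed, whereas listing all fields makes the compiler point at every site that needs a decision.

    struct Options {
        tracing: bool,
        verbose: bool,
        // Newly added field: every destructuring site should decide what to
        // do with it.
        deny_warnings: bool,
    }

    fn summarize(opts: &Options) -> String {
        // With a discard, adding `deny_warnings` compiles silently and the new
        // field is simply ignored here, which is exactly the missed spot we
        // are worried about:
        //
        //   let Options { tracing, .. } = opts;
        //
        // Listing all fields explicitly is more cumbersome, but adding or
        // removing a field now triggers a compile error at this very line.
        let Options { tracing, verbose, deny_warnings } = opts;
        format!("tracing={tracing}, verbose={verbose}, deny_warnings={deny_warnings}")
    }

    fn main() {
        let opts = Options { tracing: true, verbose: false, deny_warnings: true };
        println!("{}", summarize(&opts));
    }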
The main trick here was transforming Assignment to contain a
`patterns` field of type `Vec<(UntypedPattern, Option<Annotation>)>`.
This then meant that I could remove the `pattern` and `annotation`
fields from `Assignment`. The parser handles `=` and `<-` just fine,
because in the future `=` with multiple patterns will mean some kind
of optimization on tuples. But since we don't have that optimization
yet, when someone uses multiple patterns with an `=`, an error is
returned from the type checker right where `infer_seq` looks for
backpassing. From there, the rest of the work was in
`Project::backpassing`, where I only needed to rework some things to
handle a list of patterns instead of just one.
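Roughly, the shape of the change looks like the following sketch (simplified field names; the actual AST carries more information, such as locations and the assignment kind):

    #![allow(dead_code)] // shape-only sketch, not the exact aiken AST

    struct Annotation;
    struct UntypedPattern;
    struct UntypedExpr;

    // Before: exactly one pattern and one optional annotation per assignment.
    struct AssignmentBefore {
        value: Box<UntypedExpr>,
        pattern: UntypedPattern,
        annotation: Option<Annotation>,
    }

    // After: a list of (pattern, optional annotation) pairs, so that `=` and
    // `<-` parse identically; multi-pattern `=` is only rejected later, by the
    // type-checker, right where `infer_seq` looks for backpassing.
    struct AssignmentAfter {
        value: Box<UntypedExpr>,
        patterns: Vec<(UntypedPattern, Option<Annotation>)>,
    }

    fn main() {}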
The third assignment kind (Bind) is gone and is now reflected through a boolean parameter. Note that this parameter is completely erased by the type-checker, so that the rest of the pipeline (i.e. code generation) doesn't have to make any assumptions: it simply can't see a backpassing let or expect.
This is more holistic and less awkward than having a monadic bind that works only with some pre-defined types. Backpassing works with _any_ function, and can be implemented relatively easily by rewriting the AST on-the-fly.
Also, it is far easier to explain than what a monadic bind is, how its behavior differs from type to type, and why it isn't generally available for any monadic type.
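To make the rewrite concrete, here is a toy model of it (tiny made-up AST and names, not aiken's): a backpassing 'let p <- f(args)' followed by a continuation becomes a plain call 'f(args, fn(p) { continuation })'.

    // Toy model of the rewrite; names and AST shapes are illustrative only.
    #[derive(Debug)]
    enum Expr {
        Var(String),
        Call { f: Box<Expr>, args: Vec<Expr> },
        Fn { params: Vec<String>, body: Box<Expr> },
        // `backpassing: true` stands for `let pattern <- value` followed by `then`.
        Assignment { backpassing: bool, pattern: String, value: Box<Expr>, then: Box<Expr> },
    }

    // Rewrite `let p <- f(x); then` into `f(x, fn(p) { then })`.
    fn desugar(e: Expr) -> Expr {
        match e {
            Expr::Assignment { backpassing: true, pattern, value, then } => {
                let callback = Expr::Fn {
                    params: vec![pattern],
                    body: Box::new(desugar(*then)),
                };
                match *value {
                    // Append the continuation as an extra argument to the call.
                    Expr::Call { f, mut args } => {
                        args.push(callback);
                        Expr::Call { f, args }
                    }
                    // A bare function value simply gets applied to the continuation.
                    other => Expr::Call { f: Box::new(other), args: vec![callback] },
                }
            }
            other => other,
        }
    }

    fn main() {
        // let x <- and_then(opt)
        // x
        let assignment = Expr::Assignment {
            backpassing: true,
            pattern: "x".into(),
            value: Box::new(Expr::Call {
                f: Box::new(Expr::Var("and_then".into())),
                args: vec![Expr::Var("opt".into())],
            }),
            then: Box::new(Expr::Var("x".into())),
        };
        // Becomes: and_then(opt, fn(x) { x })
        println!("{:?}", desugar(assignment));
    }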
It might be slightly cleaner and more extensible to change this to return a summary, potentially even making it track the tests, coverage, etc. so it can be serialized to JSON. But, for now, this is much simpler, and it is the approach that KtorZ suggested.
We'll piggyback on the tracing capabilities of the VM to provide labelling for prop tests. To ensure we do not interfere with normal traces, we only count traces that start with a NUL byte as labels. That convention is assumed to be known by the companion fuzz library, which should then provide the labelling capabilities as a dedicated function.
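On our side, the classification boils down to something like this sketch (hypothetical helper name, not the exact code):

    // A trace whose very first byte is NUL is counted as a label for
    // property-test reporting; anything else remains a normal trace.
    // (Hypothetical helper name; the real code base differs.)
    enum TraceKind<'a> {
        Label(&'a str),
        Trace(&'a str),
    }

    fn classify_trace(raw: &str) -> TraceKind<'_> {
        match raw.strip_prefix('\0') {
            Some(label) => TraceKind::Label(label),
            None => TraceKind::Trace(raw),
        }
    }

    fn main() {
        assert!(matches!(classify_trace("\u{0}even"), TraceKind::Label("even")));
        assert!(matches!(classify_trace("plain trace"), TraceKind::Trace(_)));
    }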
I've been benchmarking this through the shrinking of 'large' lists, and the cache brings about a 1.5x speed increase. For small and simple cases, the cache has no visible effect (positive or negative).
Before this commit, we would always show the 'declared form' of type aliases, with their generic, non-instantiated parameters. This now tries to unify the annotation with the underlying inferred type to provide even better alias pretty printing.
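As a rough illustration of the idea, a toy one-way unifier over a made-up type representation (not the actual pretty-printer): bind the annotation's generic parameters against the inferred type, then print the instantiated annotation.

    use std::collections::HashMap;

    // Made-up type representation, just enough to show the idea.
    #[derive(Debug, Clone)]
    enum Ty {
        Var(String),          // generic parameter, e.g. `a`
        App(String, Vec<Ty>), // e.g. `Option<Int>`
    }

    // One-way unification: bind the annotation's generic variables to the
    // corresponding parts of the inferred type.
    fn unify(annotation: &Ty, inferred: &Ty, subst: &mut HashMap<String, Ty>) -> bool {
        match (annotation, inferred) {
            (Ty::Var(name), t) => {
                subst.insert(name.clone(), t.clone());
                true
            }
            (Ty::App(a, xs), Ty::App(b, ys)) if a == b && xs.len() == ys.len() => {
                for (x, y) in xs.iter().zip(ys) {
                    if !unify(x, y, subst) {
                        return false;
                    }
                }
                true
            }
            _ => false,
        }
    }

    // Replace bound variables in the annotation so it can be pretty-printed
    // in its instantiated form.
    fn apply(t: &Ty, subst: &HashMap<String, Ty>) -> Ty {
        match t {
            Ty::Var(name) => subst.get(name).cloned().unwrap_or_else(|| t.clone()),
            Ty::App(name, args) => {
                Ty::App(name.clone(), args.iter().map(|a| apply(a, subst)).collect())
            }
        }
    }

    fn main() {
        // Declared alias annotation: Option<a>; inferred type: Option<Int>.
        let annotation = Ty::App("Option".into(), vec![Ty::Var("a".into())]);
        let inferred = Ty::App("Option".into(), vec![Ty::App("Int".into(), vec![])]);
        let mut subst = HashMap::new();
        if unify(&annotation, &inferred, &mut subst) {
            // Prints the instantiated annotation rather than its declared form.
            println!("{:?}", apply(&annotation, &subst));
        }
    }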