We simply provide a flag taking a free-form value, which acts as the
name of the module to look up in the 'env' folder. The strategy is to
substitute the environment module name on-the-fly whenever a user
imports 'env'.
If the environment isn't found, an 'UnknownModule' error is raised
(which I will slightly adjust in a following commit to something more
environment-specific).
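Roughly, the intent looks like this (the module and constant names
below are made up purely for illustration):

```aiken
// env/preview.ak -- one of possibly many environment modules.
pub const network_id: Int = 0

// lib/my_project/config.ak -- 'env' resolves to whichever environment
// module was selected through the flag.
use env

pub fn is_mainnet() -> Bool {
  env.network_id == 1
}
```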
There are a few important consequences to this design which may not
seem immediately obvious:
1. We parse and type-check every env module, even if it isn't used.
This ensures that code doesn't break with a compilation error simply
because people forgot to type-check a given env.
Note that compilation could still fail because the env module itself
could provide an invalid API. So this only guards against each module
being wrong when taken in isolation.
2. Technically, this also means that one can import env modules in
other env modules by their names. I don't know whether that's a good or
bad idea at this point, but it doesn't really do any harm; dependencies
and cycles are handled all the same (see the sketch after this list).
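A sketch of consequence (2), reusing the hypothetical 'preview' module
from above; the exact import path is my best guess:

```aiken
// env/default.ak -- can depend on another environment module, referring
// to it by its module name instead of 'env'.
use preview

pub fn is_preview_network(id: Int) -> Bool {
  id == preview.network_id
}
```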
- Doesn't allow pattern-matching on G1/G2 elements and strings,
because the use cases for those are unclear and it adds complexity to
the feature.
- We still _parse_ patterns on G1/G2 elements and strings, but emit an
error in those cases.
- The syntax is the same as for bytearray literals (i.e. it supports
hex, UTF-8 strings, or plain arrays of bytes), as shown in the example
below.
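For instance, a match clause can use any of the three literal forms:

```aiken
// Patterns on a ByteArray value; patterns on G1/G2 elements and String
// values still parse but are rejected with an error.
fn classify(bytes: ByteArray) -> Int {
  when bytes is {
    #"00ff" -> 0
    "aiken" -> 1
    #[1, 2, 3] -> 2
    _ -> 3
  }
}
```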
There are currently two zero-arg builtins:
- mkNilData
- mkNilPairData
And while they strictly speaking take no arguments, the VM still
requires that they be applied to an extra unit argument.
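A minimal sanity check, assuming the two builtins are surfaced from
aiken/builtin under their usual snake_case names:

```aiken
use aiken/builtin

// Both calls look nullary in Aiken; the extra unit argument required by
// the VM is applied behind the scenes during code generation.
test zero_arg_builtins() {
  builtin.mk_nil_data() == [] && builtin.mk_nil_pair_data() == []
}
```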
While this builtin is readily available through the Aiken syntax
`[head, ..tail]`, there's no reason not to support its builtin form,
even though we may not encourage its usage. For completeness and to
avoid bad surprises, it is now supported.
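A quick sketch of the two forms side by side, assuming mkCons is
exposed from aiken/builtin as mk_cons:

```aiken
use aiken/builtin

test mk_cons_matches_sugar() {
  let nil: List<Data> = builtin.mk_nil_data()
  // The sugar form and the raw builtin form should build the same list.
  [builtin.i_data(42), ..nil] == builtin.mk_cons(builtin.i_data(42), nil)
}
```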
Fixes #964.
Actually, this has been a bug for a long time, it seems. Calling any
prelude function through a qualified import would result in a codegen
crash. Whoopsie.
This is now fixed as shown by the regression test.
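For reference, the failing scenario boils down to something like the
following sketch (assuming the prelude is importable as the `aiken`
module and exposes `identity`):

```aiken
use aiken

// Calling a prelude function through its qualified name used to crash
// code generation.
test qualified_prelude_call() {
  aiken.identity(14) == 14
}
```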
This is not fully satisfactory as it pollutes the prelude a bit.
Ideally, those functions should only be visible to and usable by the
underlying trace code. But for now, we'll just go with it.
Also, this commit makes `apply_term` automatically re-intern the
program, since it isn't safe to apply arbitrary terms onto a UPLC
program. In particular, terms that introduce new let-bindings (via
lambdas) will mess with the already generated DeBruijn indices.
The problem doesn't occur for pure constant terms like Data, so we
still keep a safe and fast variant, 'apply_data', for when it's needed.
This was a mess, to say the least. The mess started when we wanted to
make all definitions in codegen use immutable maps of references --
which was and still is a good idea. Yet, the population of the data
type and function definitions was somehow done in a separate step, in
a rather ad-hoc manner.
This commit changes that to ensure the project's data_types and
functions are populated while type-checking the AST, so that we don't
need to redo it afterwards.
The code for registering the data type definitions and function
definitions was also duplicated in at least 3 places. It is now a
method of the TypedModule.
Note: this change isn't just cosmetic; it's also necessary for the
commit that follows, which aims at adding tests to the set of available
function definitions, thus making property tests callable.
This is very, very rough at the moment. But it does a couple of things:
1. The 'ArgVia' now contains an Expr/TypedExpr which should unify to a
Fuzzer. This is to avoid having to introduce custom logic to handle
fuzzer referencing. So this now accepts function calls, field accesses,
etc., so long as they unify to the right thing (see the sketch at the
end of this note).
2. I've done quite a lot of cleanup in aiken-project, mostly around the
tests and the naming surrounding them. What we used to call 'Script' is
now called 'Test' and is an enum between UnitTest (ex-Script) and
PropertyTest. I've moved some boilerplate and relevant functions into
impl blocks on those types.
3. I've completed the end-to-end pipeline of:
- Compiling the property test
- Compiling the fuzzer
- Generating an initial seed
- Running property tests sequentially, threading the seed through each step.
An interesting finding is that I had to wrap the prop test in a wrapper
similar to the one we use for validators, to ensure we convert
primitive types wrapped in Data back to UPLC terms. This is necessary
because the fuzzer returns a ProtoPair (and soon an Array) which holds
'Data'.
At the moment, we do nothing with the size, though the size should
ideally grow after each iteration (up to a certain cap).
In addition, there are a couple of TODO/FIXME notes that I left in the
code as reminders of what's left to do beyond the obvious (error and
success reporting, testing, etc.).
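Here's a rough sketch of what (1) enables at the surface level, using a
made-up 'constant' fuzzer and assuming the Fuzzer shape described in
the note below is available from the prelude:

```aiken
// A made-up fuzzer constructor: it ignores the randomness entirely and
// always yields the given value.
fn constant(x: a) -> Fuzzer<a> {
  fn(seed) { (seed, x) }
}

// Because 'ArgVia' now holds a full (typed) expression, the fuzzer can
// be the result of a call rather than just a bare name.
test prop_constant(x via constant(42)) {
  x == 42
}
```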
The parameter is special: it takes no annotation, but a 'via' keyword
followed by an expression that should unify to a Fuzzer<a>, where
Fuzzer<a> = fn(Seed) -> (Seed, a). The current commit only allows name
identifiers for now. Ultimately, this may allow full expressions.
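A sketch of the intended surface syntax, with a made-up bare-name
fuzzer and assuming Seed and Fuzzer are exposed from the prelude as
described above:

```aiken
// A trivial fuzzer matching fn(Seed) -> (Seed, Int): it ignores the
// seed's randomness and always produces 42.
fn always_42(seed: Seed) -> (Seed, Int) {
  (seed, 42)
}

// No annotation on the parameter: its type is inferred from the fuzzer
// given after 'via'. Only a bare name is accepted at this stage.
test prop_positive(n via always_42) {
  n > 0
}
```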