True corresponds to Constr=1 and False corresponds to Constr=0; their position in the vector must reflect that. Note that while this would in principle impact codegen for any other type, it doesn't for bool: we likely never looked up this type definition because it is well-known. It does matter now, though, since the 'reify' function relies on this ordering. Whoopsie.
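As a purely hypothetical illustration of that ordering constraint (not the actual type-definition code), the constructor at index 0 has to be False and the one at index 1 has to be True, so a Constr tag can be used directly as an index when reifying:

```rust
// Hypothetical sketch: constructor names indexed by their Constr tag,
// so False sits at index 0 and True at index 1.
const BOOL_CONSTRUCTORS: [&str; 2] = ["False", "True"];

fn reify_bool(constr_tag: usize) -> Option<&'static str> {
    BOOL_CONSTRUCTORS.get(constr_tag).copied()
}

fn main() {
    assert_eq!(reify_bool(0), Some("False"));
    assert_eq!(reify_bool(1), Some("True"));
}
```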
This is still very rough at the moment, but it does a couple of things:
1. 'ArgVia' now contains an Expr/TypedExpr which should unify to a Fuzzer. This avoids having to introduce custom logic to handle fuzzer referencing, so it now accepts function calls, field accesses, etc., so long as they unify to the right thing.
2. I've done quite a lot of cleanup in aiken-project, mostly around the tests and the naming surrounding them. What we used to call 'Script' is now called 'Test' and is an enum with two variants: UnitTest (ex-Script) and PropertyTest (see the sketch after this list). I've moved some boilerplate and the relevant functions into the corresponding impl blocks.
3. I've completed the end-to-end pipeline of:
- Compiling the property test
- Compiling the fuzzer
- Generating an initial seed
- Running property tests sequentially, threading the seed through each step.
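As a rough, hypothetical sketch of the renaming described in point 2 (the variant payloads below are placeholders, not the actual aiken-project definitions):

```rust
// Hypothetical sketch of the new naming: what used to be 'Script' is now
// one variant of 'Test'. Payload fields are stand-ins, not the real ones.
pub struct UnitTest {
    pub name: String,
}

pub struct PropertyTest {
    pub name: String,
    pub fuzzer: String, // stand-in for the compiled fuzzer expression
}

pub enum Test {
    UnitTest(UnitTest),
    PropertyTest(PropertyTest),
}
```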
An interesting finding is that I had to wrap the property test in a wrapper similar to the one we use for validators, to ensure we convert primitive types wrapped in Data back to UPLC terms. This is necessary because the fuzzer returns a ProtoPair (and soon an Array), which holds 'Data'.
At the moment, we do nothing with the size, though the size should ideally grow after each iteration (up to a certain cap).
In addition, there are a couple of todo/fixme comments that I left in the code as reminders of what's left to do beyond the obvious (error and success reporting, testing, etc.).
We've been wrongly representing large ints as BigInt, causing them to behave differently in the VM through builtins like 'serialise_data'. Indeed, we expect anything that fits in 8 bytes to be encoded as Major Type 0 or 1, but we were switching to the Major Type 6 encoding (tagged, PosBigInt, NegBigInt) for much smaller values: anything outside the range [-2^63, 2^63 - 1] was treated as a big int (positive or negative).
Why? Because we checked whether a value would fit in an i64 and, if it didn't, treated it as a big int. But the reality is more subtle: Major Types 0 and 1 together cover the full range [-2^64, 2^64 - 1]. Fortunately, Rust has i128 and the minicbor library implements TryFrom in a way that enforces exactly that range, so we're back on track easily.
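To make the boundary concrete, here is a minimal sketch of the intended check, not the actual uplc code; the function name and the use of i128 as input are illustrative assumptions:

```rust
// Minimal sketch (not the actual implementation): decide whether a value
// needs the Major Type 6 (tagged big int) encoding.
fn needs_bignum(i: i128) -> bool {
    const MIN: i128 = -(1_i128 << 64); // -2^64, the smallest Major Type 1 value
    const MAX: i128 = (1_i128 << 64) - 1; // 2^64 - 1, the largest Major Type 0 value
    i < MIN || i > MAX
}

fn main() {
    assert!(!needs_bignum(u64::MAX as i128)); // 2^64 - 1 still fits Major Type 0
    assert!(needs_bignum((u64::MAX as i128) + 1)); // 2^64 genuinely needs a big int
}
```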
This improves error messages for `a |> b(x)`.
We need to do a special check when looping over the args and unifying them. The information we need lives in a function that does not belong to the pipe typer, so I used a closure to forward a way to add metadata to the error when the first argument in the loop has a unification error. Simply adding the metadata at the pipe-typer level is not good enough, because then we might also annotate regular unification errors coming from the other args.
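Here is a hypothetical, self-contained sketch of that closure-forwarding idea; all names (unify_args, UnifyError, from_pipe) are illustrative, not the actual aiken typer API:

```rust
#[derive(Debug)]
struct UnifyError {
    message: String,
    from_pipe: bool,
}

// The generic argument-unification loop takes a hook so the caller can
// attach extra metadata only when the *first* argument fails to unify.
fn unify_args(
    args: &[i64],
    expected: &[i64],
    mut annotate_first: impl FnMut(UnifyError) -> UnifyError,
) -> Result<(), UnifyError> {
    for (index, (got, want)) in args.iter().zip(expected).enumerate() {
        if got != want {
            let err = UnifyError {
                message: format!("expected {want}, got {got}"),
                from_pipe: false,
            };
            // Only the first argument holds the piped value, so only that
            // error gets the extra pipe-related context.
            return Err(if index == 0 { annotate_first(err) } else { err });
        }
    }
    Ok(())
}

fn main() {
    // The pipe typer would forward a closure that marks the error as
    // coming from a pipe, without touching unrelated argument errors.
    let result = unify_args(&[1, 2], &[9, 2], |mut e| {
        e.from_pipe = true;
        e
    });
    println!("{result:?}");
}
```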
The 'HEAD' call that is done to resolve package revisions from unpinned versions is already quite cheap, but it would still be better to avoid overloading GitHub with such calls, especially for users of a language server that compiles on-the-fly very often. Upstream packages don't change often, so there's no need to constantly check the etag.
So we now keep a local copy of the etags we fetched, as well as a timestamp of the last time we fetched them, so that we only re-fetch them if more than an hour has elapsed. This should be fairly resilient while still massively improving the UX for people showing up after a day and trying to use the latest 'main' features.
This means that we now effectively have two caching levels:
- In the manifest, we store previously fetched etags.
- In the filesystem, we have a cache of already downloaded zip archives.
The first cache is basically invalidated every hour, while the second
cache is only invalidated when an etag changes. For pinned versions,
nothing is invalidated as they are considered immutable.
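A hedged sketch of the staleness check described above; the type, field, and function names are illustrative, not the actual aiken-project code:

```rust
use std::time::{Duration, SystemTime};

// Hypothetical representation of an etag stored in the manifest together
// with the time it was last fetched.
struct CachedEtag {
    etag: String,
    fetched_at: SystemTime,
}

const MAX_AGE: Duration = Duration::from_secs(60 * 60);

// Only re-validate the etag against GitHub once the cached entry is
// older than one hour (or missing entirely).
fn needs_refetch(cached: Option<&CachedEtag>, now: SystemTime) -> bool {
    match cached {
        None => true,
        Some(c) => now
            .duration_since(c.fetched_at)
            .map(|age| age > MAX_AGE)
            .unwrap_or(true),
    }
}

fn main() {
    let now = SystemTime::now();
    let fresh = CachedEtag {
        etag: "W/\"abc123\"".into(),
        fetched_at: now,
    };
    assert!(!needs_refetch(Some(&fresh), now));
    assert!(needs_refetch(None, now));
}
```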
It was not consuming the next case if there was no condition being checked in the clause. Now it properly always consumes the next clause, unless it is the last one.
closes #553
* rename flat to encode
* rename unflat to decode
* alias both to their old names
* both only print to stdout; the user can pipe the output to a file
* split cbor and hex flags
* hex flag works for either cbor or flat
* encode takes --to flag
[name, named-debruijn, debruijn]
* decode takes --from flag
[name, named-debruijn, debruijn]