Using ByteArrays as vectors on-chain is a lot more efficient than relying on actual Data lists of values. From the Rust end, it doesn't change much as we were already manipulating vectors anyway.
Also, this commit makes `apply_term` automatically re-intern the
program since it isn't safe to apply any term onto a UPLC program. In
particular, terms that introduce new let-bindings (via lambdas) will
mess with the already generated DeBruijn indices.
The problem doesn't occur for pure constant terms like Data. So we
still have a safe and fast version 'apply_data' when needed.
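To illustrate the index-shifting problem with a toy example (these are not the uplc crate's actual types), consider how a de Bruijn variable changes meaning once an extra lambda is spliced between it and its binder:
```rust
// Toy de Bruijn terms: Var(0) refers to the innermost enclosing lambda.
#[derive(Debug)]
enum Term {
    Var(usize),
    Lam(Box<Term>),
}

fn main() {
    // Intended program: λouter. outer — the body Var(0) points at `outer`.
    let original = Term::Lam(Box::new(Term::Var(0)));

    // If applying a term introduces a new binder around the body, the same
    // Var(0) now resolves to the new lambda instead of `outer`; it would
    // need to be shifted to Var(1). Re-interning (recomputing indices from
    // names) is what restores the intended references.
    let after_application = Term::Lam(Box::new(Term::Lam(Box::new(Term::Var(0)))));

    println!("{original:?} vs {after_application:?}");
}
```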
This was a mess, to say the least. The mess started when we wanted
to make all definitions in codegen use immutable maps of references --
which was and still is a good idea. Yet, the population of the data
type and function definitions was somehow done in a separate step,
in a rather ad-hoc manner.
This commit changes that to ensure the project's data_types and
functions are populated while type-checking the AST, so that we
don't need to redo it afterwards.
The code for registering the data type definitions and function
definitions was also duplicated in at least 3 places. It is now a
method of the TypedModule.
Note: this change isn't just cosmetic; it's also necessary for the
commit that follows, which aims at adding tests to the set of
available function definitions, thus making property tests
callable.
Those end-to-end tests are useful, both for controlling the behavior of the shrinker and for double-checking the reification of Plutus Data back into untyped expressions.
I had to work around a few things to get opaque types and private types to play nice. I also found a weird bug due to how we apply parameters after unique DeBruijn indices have already been assigned. A work-around is to re-intern the program.
True corresponds to Constr=1 and False corresponds to Constr=0; their positions in the vector must reflect that. Note that while this would in principle impact codegen for any other type, it doesn't for Bool since we likely never looked up this type definition, it being well-known. We do now, as the 'reify' function relies on it. Whoopsie.
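As a hedged illustration of that ordering (these are not the codegen's actual data structures), reifying a Bool from its constructor tag looks roughly like this:
```rust
// Illustrative only: Bool's constructors ordered by Plutus Data constructor
// tag, i.e. index 0 = False, index 1 = True.
#[derive(Debug, PartialEq)]
enum Bool {
    False, // Constr tag 0
    True,  // Constr tag 1
}

fn reify_bool(constr_tag: u64) -> Option<Bool> {
    match constr_tag {
        0 => Some(Bool::False),
        1 => Some(Bool::True),
        _ => None,
    }
}

fn main() {
    assert_eq!(reify_bool(0), Some(Bool::False));
    assert_eq!(reify_bool(1), Some(Bool::True));
}
```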
This is very very rough at the moment. But it does a couple of things:
1. The 'ArgVia' now contains an Expr/TypedExpr which should unify to a Fuzzer. This is to avoid having to introduce custom logic to handle fuzzer referencing. So this now accepts function calls, field accesses, etc., so long as they unify to the right thing.
2. I've done quite a lot of cleanup in aiken-project, mostly around the tests and the naming surrounding them. What we used to call 'Script' is now called 'Test' and is an enum with two variants: UnitTest (ex-Script) and PropertyTest. I've moved some boilerplate and relevant functions under impl blocks for those types.
3. I've completed the end-to-end pipeline of:
- Compiling the property test
- Compiling the fuzzer
- Generating an initial seed
- Running property tests sequentially, threading the seed through each step.
An interesting finding is that I had to wrap the property test in a wrapper similar to the one we use for validators, to ensure we convert primitive types wrapped in Data back to UPLC terms. This is necessary because the fuzzer returns a ProtoPair (and soon an Array) which holds 'Data'.
At the moment, we do nothing with the size, though the size should ideally grow after each iteration (up to a certain cap).
In addition, there are a couple of todos/fixmes that I left in the code as reminders of what's left to do beyond the obvious (error and success reporting, testing, etc.).
The parameter is special as it takes no annotation but a 'via' keyword followed by an expression that should unify to a Fuzzer<a>, where Fuzzer<a> = fn(Seed) -> (Seed, a). The current commit only allows name identifiers for now. Ultimately, this may allow full expressions.
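For illustration only, here is a rough Rust rendering of that Fuzzer shape and of the seed threading described above (type and function names are mine, not the project's):
```rust
// Illustrative types: a fuzzer maps a seed to a new seed plus a value.
type Seed = u64;
type Fuzzer<A> = fn(Seed) -> (Seed, A);

// Run a property sequentially, threading the seed through each iteration.
fn run_property<A>(fuzzer: Fuzzer<A>, prop: fn(&A) -> bool, mut seed: Seed, runs: u32) -> bool {
    for _ in 0..runs {
        let (next_seed, value) = fuzzer(seed);
        if !prop(&value) {
            return false; // counter-example found
        }
        seed = next_seed;
    }
    true
}

fn main() {
    // A toy fuzzer: a simple linear-congruential step producing small ints.
    let small_int: Fuzzer<u64> = |seed| {
        let next = seed.wrapping_mul(6364136223846793005).wrapping_add(1);
        (next, next % 100)
    };
    assert!(run_property(small_int, |n| *n < 100, 42, 100));
}
```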
We cannot enforce internal invariants on opaque types from only structural checks on Data. Thus, it is forbidden to find an opaque type in an outward-facing interface. Instead, users should rely on intermediate representations and lift them into opaque types using constructors and methods provided by the type (e.g. Dict.from_list, Rational.from_int, Rational.new, ...)
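As a loose Rust analogue of that rule (illustrative only, unrelated to the actual stdlib types): an invariant-carrying type can only be built through a checking constructor, never from raw structural data.
```rust
// `Rational` keeps an internal invariant (a non-zero denominator) that purely
// structural data can't guarantee, so the only way in is a constructor that
// checks it.
pub struct Rational {
    numerator: i64,
    denominator: i64, // invariant: never zero
}

impl Rational {
    pub fn new(numerator: i64, denominator: i64) -> Option<Rational> {
        if denominator == 0 {
            None
        } else {
            Some(Rational { numerator, denominator })
        }
    }
}

fn main() {
    assert!(Rational::new(1, 2).is_some());
    assert!(Rational::new(1, 0).is_none()); // the invariant is enforced here
}
```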
We've been wrongly representing large ints as BigInt, causing them to
behave differently in the VM through builtins like 'serialise_data'.
Indeed, we expect anything that fits in 8 bytes to be encoded as Major
Type 0 or 1. But we were switching to encoding as Major type 6
(tagged, PosBigInt, NegBigInt) for much smaller values! Anything
outside of the range [-2^63, 2^63-1] would be treated as a big int
(positive or negative).
Why? Because we checked whether a value i would fit in an i64, and if
it didn't we treated it as big int. But the reality is more subtle...
Fortunately, Rust has i128 and the minicbor library implements TryFrom
which enforces that the value fits in a range of [-2^64, 2^64 - 1], so
we're back on track easily.
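As a rough sketch of the intended boundary (standard library only; the classification helper is hypothetical, not the actual encoder):
```rust
// CBOR integer classification: Major Type 0/1 covers [-2^64, 2^64 - 1];
// only values outside that range need the tagged bignum encoding.
fn classify(i: i128) -> &'static str {
    const MIN: i128 = -(1i128 << 64); // -2^64
    const MAX: i128 = (1i128 << 64) - 1; // 2^64 - 1
    if (MIN..=MAX).contains(&i) {
        if i >= 0 {
            "major type 0 (unsigned int)"
        } else {
            "major type 1 (negative int)"
        }
    } else {
        "major type 6 (tagged PosBigInt / NegBigInt)"
    }
}

fn main() {
    assert_eq!(classify(u64::MAX as i128), "major type 0 (unsigned int)");
    assert_eq!(
        classify(u64::MAX as i128 + 1),
        "major type 6 (tagged PosBigInt / NegBigInt)"
    );
}
```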
While looking at some code, I noticed that this
warning would show up even when an error for a
non-exhaustive when/is had been reported for the same
expression. That's not a useful place to show the warning:
the match isn't exhaustive yet, so we should let the user
finish and only surface the errors. Once things are
exhaustive, the code proceeds, and if a warning was recorded
for a single clause pattern, it can then be pushed, because
that's when it's actually useful.
This commit allows Data to be optionally annotated with a
phantom-type. This doesn't change anything in codegen but we can now
leverage this information to generate better blueprint schemas.
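A loose Rust analogue of the idea (purely illustrative, not the compiler's representation): the extra type parameter carries no runtime information, but tooling can use it to derive a more precise schema.
```rust
use std::marker::PhantomData;

// The payload is still opaque Data; the phantom parameter only records what
// the Data is expected to decode to, which a blueprint generator can use.
struct AnnotatedData<T> {
    raw: Vec<u8>,
    _expected: PhantomData<T>,
}

impl<T> AnnotatedData<T> {
    fn new(raw: Vec<u8>) -> Self {
        AnnotatedData { raw, _expected: PhantomData }
    }
}
```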
Note that the formatter rewrites parens-block sequences as curly-block
sequences anyway. Albeit weird-looking syntax, they are valid
nonetheless.
I also clarified a bit the hints and description of the
'illegal::return' error as it would mistakenly say 'function' instead
of 'block'.
- do not erase sequences if the sole expression is an assignment
- emit parse error if an assignment is assigned to an assignment
- do not allow assignments in logical op chains
This reverts commit 21f0b3a6220fdafb8f6aad6855de89d8cdde0e1b.
Rationale:
The absence of a clause guard here was *on purpose*. Indeed,
introducing a clause guard here forces either duplication or the use
of a wildcard, which is not "future proof".
Should we make a change to that one day (e.g. add a new variant to
TraceLevel), we won't get any compiler warning and we'll very likely
forget to update that particular section of the code.
So, as much as possible, enforcing complete pattern matches on variants
makes for code that is easier to maintain in the long run.
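A minimal sketch of the rationale (variant names are illustrative): with an exhaustive match, adding a new variant fails to compile, whereas a wildcard arm would silently keep the old behaviour.
```rust
enum TraceLevel {
    Verbose,
    Silent,
}

fn describe(level: &TraceLevel) -> &'static str {
    // No wildcard: if a new TraceLevel variant is added, this match stops
    // compiling and forces us to revisit this code path.
    match level {
        TraceLevel::Verbose => "keep traces",
        TraceLevel::Silent => "strip traces",
    }
}

fn main() {
    println!("{}", describe(&TraceLevel::Verbose));
}
```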
This allows for more fine-grained control over how traces are shown. Now users can instrument the compiler to preserve only their user-defined traces, or only the compiler-generated ones, or all, or none. We also want to add another trace level on top of that: 'compact', to only show line numbers; this will work for both user-defined and compiler-generated traces.
We rely on some errors to just bubble up and get printed.
By matching on the result at the top level like this, we blocked some
error messages from being printed. For me this showed up
when `cargo run -- new thing/thing` printed nothing even when there
was an existing `thing` folder. It has already been the pattern for
some time for some subcommands to handle calling process::exit(1) in
situations where they need to handle error reporting more specially. It
may seem lame, hacky, or repetitive but it's easy to maintain and read.
This is a *slight* hack / abuse of the code() method as we are now
doing a bit of formatting within that function. Yet, we only do so at
the very top-level (i.e. project's Error) because we can't actually
fiddle with how miette presents errors.
Also removed the 'clear' flag; we now clear by default instead of
clogging the terminal view.
This now works pretty nicely, and the logic is back under
`aiken_project`.
Rather than have this logic in the aiken binary, this provides a generic
mechanism to do "something" on file change events. KtorZ is going to
handle wiring it up to the CLI in the best way for the project.
I tried to write some tests for this, but it's hard to isolate the
watcher logic without wrestling with the borrow checker, or overly
neutering this utility.
This adds the following command
```
aiken watch
```
There are some open questions to answer, though:
- I really like the ergonomics of `aiken watch`; but it also makes sense
as a flag to `aiken check` or `aiken build` etc.; should we just
support the flag, the command, or both?
- Right now I duplicated the with_project method, because it forces
process::exit(1); Should we refactor this, and if so, how?
- Are there other configuration options we want?
To pass 2 of the conformance tests, we need to make sure
that we aren't typechecking builtin arguments as they
are applied. This is done by removing the call to check_type
and then reworking all the associated unwrap methods on Value
so that they return the same errors that were being returned before.
feat: impl flat serialization and deserialization for bls constants
feat: started on cost models for the new builtins
Co-authored-by: rvcas <x@rvcas.dev>
- sort alphabetically
- add some of the missing builtins used for ints
- comment on what is "correct" for future additions
- comment on the current remaining missing builtins
- comment on the current incoherent method names
This was somewhat weirdly done, with a boolean 'imported' set on the
former, but an explicit new warning for values. I don't see the point
of distinguishing them so I just merged them all into a single
warning.
I have however preserved the 'UnusedType' and 'UnusedConstructor'
warnings since they were ALSO used for unused private constructors or
types.
I initially removed the 'UnknownTypeConstructor' since it wasn't used anywhere and was in fact dead code. On second thought, however, it is nicer to provide a slightly better error message when a constructor is missing, as well as some valid suggestions. Prior to that commit, we would simply return an 'UnknownVariable' and the hint might suggest lowercase identifiers, which is wrong.
I previously missed a case, and it caused qualified imports to be added at the end if they were lexicographically smaller than ALL other qualified imports. No big deal, but this is now fixed.
We're going to have more quickfixes, so it's best not to overload the
'server' module. Plus, there's a lot of boilerplate around the
quickfixes so we might want to factor it out.
It's a bit 'off-topic' to keep these in aiken-lang as those functions are really just about the lsp. Plus, it removes some of the boilerplate and makes the entire edit more readable and re-usable. Now we can tackle other similar errors with the same quickfix.
This removes the need to rely on the formatter to clean things up
after inserting a new import. While this is not so useful for imports, I
wanted to experiment with the approach for future similar edits (for
example, when suggesting an inline rewrite).
- Add support to the formatter for these doc comments
- Add a new field to `Arg` `doc: Option<String>`
- Don't attach docs immediately after typechecking a module
- instead we should do it on demand in docs, build, and lsp
- the check command doesn't need to have any docs attached
- doing it more lazily defers the computation until later making
typechecking feedback a bit faster
- Add support for function arg and validator param docs in
`attach_module_docs` methods
- Update some snapshots
- Add put_doc to Arg
closes #685
This improves error messages for `a |> b(x)`.
We need to do a special check when looping over the args
and unifying. This information lives in a function that does not belong
to the pipe typer, so I used a closure to forward along a way to add
metadata to the error when the first argument in the loop has a
unification error. Simply adding the metadata at the pipe typer
level is not good enough because then we may annotate regular
unification errors from the args.
Now we "handle" vars that call the cyclic function.
That includes vars in the cyclic function as well as in other functions
"handle" meaning we modify the var to be a call that takes in more arguments.
The 'HEAD' call that is done to resolve package revisions from
unpinned versions is already quite cheap, but it would still be better
to avoid overloading Github with such calls; especially for users of a
language-server that would compile on-the-fly very often. Upstream
packages don't change often so there's no need to constantly check the
etag.
So we now keep a local version of etags that we fetched, as well as a
timestamp from the last time we fetched them so that we only re-fetch
them if more than an hour has elapsed. This should be fairly resilient
while still massively improving the UX for people showing up after a
day and trying to use latest 'main' features.
This means that we now effectively have two caching levels:
- In the manifest, we store previously fetched etags.
- In the filesystem, we have a cache of already downloaded zip archives.
The first cache is basically invalidated every hour, while the second
cache is only invalidated when an etag changes. For pinned versions,
nothing is invalidated as they are considered immutable.
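A minimal sketch of the first-level cache decision, with hypothetical types and field names (not the project's actual manifest structures):
```rust
use std::time::{Duration, SystemTime};

// An etag we previously fetched, together with when we fetched it.
struct CachedEtag {
    etag: String,
    fetched_at: SystemTime,
}

// Only bother Github with a HEAD request when we have no cached etag or the
// cached one is older than an hour; otherwise trust the manifest entry.
fn needs_refresh(cached: Option<&CachedEtag>) -> bool {
    match cached {
        None => true,
        Some(entry) => entry
            .fetched_at
            .elapsed()
            .map(|age| age > Duration::from_secs(60 * 60))
            .unwrap_or(true),
    }
}

fn main() {
    assert!(needs_refresh(None)); // nothing cached yet: do the HEAD request
}
```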
And so, even for unpinned packages. In this case, we can't do a HEAD request, so we fall back to looking at what's available in the cache and using the most recently downloaded version. This is only a best effort, as the most recently downloaded one may not be the actual latest. But come on, this is a case where (a) someone didn't pin any version, and (b) is trying to build in an offline setup. We could possibly make that edge case better but let's see if anyone ever complains about it first.
When the version isn't a git sha or a tag, we always check that we got
the last version of a particular dependency before building. This is
to avoid those awkward moments where someone tries to use something from
the stdlib that is brand new, and despite using 'main' they get a
strange build failure regarding how it's not available.
An important note is that we don't actually re-download the package
when the case occurs; we merely check an HTTP ETag from a (cheap) 'HEAD'
request on the package registry. If the tag hasn't changed then that
means the local version is correct.
The behavior is completely bypassed if the version is specified using
a git sha or a tag, as in that case we can assume that fetching it once
is enough (and that it won't change). If a package maintainer
force-pushed a tag however, there may be a discrepancy and the only way
around that is to `rm -r ./build`.
Best effort to assert whether a version refers to a git sha digest or a tag. When it is, we
avoid re-downloading it if it's already fetched. But when it isn't, and thus refers to a branch,
we always re-download it. Note however that the download might be short-circuited by the
system-wide package cache, so a download doesn't actually mean a network request.
The package cache is however smart enough to assert whether a package in the cache must be
re-downloaded (using HTTP ETag). So this is mostly about delegating the re-downloading logic to
the global packages cache.
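A best-effort check could look roughly like this (the heuristic is illustrative, not necessarily the one the project uses): a 40-character hex string is treated as a commit sha, a leading 'v' followed by a digit as a release tag, and anything else as a moving branch that gets re-downloaded.
```rust
fn is_immutable_ref(version: &str) -> bool {
    let looks_like_sha =
        version.len() == 40 && version.chars().all(|c| c.is_ascii_hexdigit());
    let looks_like_tag = version
        .strip_prefix('v')
        .map(|rest| rest.chars().next().map_or(false, |c| c.is_ascii_digit()))
        .unwrap_or(false);
    looks_like_sha || looks_like_tag
}

fn main() {
    assert!(is_immutable_ref("1e0eb4f9cf10a1e2ba0adf3a60d17e8cbbf6ba22"));
    assert!(is_immutable_ref("v1.8.0"));
    assert!(!is_immutable_ref("main")); // a branch: always re-download
}
```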
Bumped into this randomly. We do correctly parse escape sequences, but
the formatter would simply put the unescaped string back on save. Now it
properly re-escapes strings before flushing them back. I also removed
the escape sequences for 'backspace' and 'new page' (form feed) as I
don't see any use case for those in an Aiken program, really...
There's really no scenario where we want to generate boilerplate that
always ends up being removed. In particular, the boilerplate breaks the
tutorial as it generates conflicting validators in the blueprint.
The only argument in favor of the boilerplate is to serve as an example
and give people a syntax reminder. However, this is better done in
the README or on the user manual directly.
fix: Opaque types are now properly handled in code gen (i.e. code gen functions, in datums/redeemers, in from data casts)
chore: add specific nested opaque type tests to code gen
I originally didn't add this because I thought it concerned mutually
recursive functions, and I couldn't picture how that would work;
I refactored all this logic into modify_self_calls, which maybe needs a
better name now.
Perf gain on some stdlib tests (line concat tests) is 93%!!
We also flip the recursive_statics field to recursive_nonstatics; this makes the codegen a little easier. It also has a hacky way to hard-code some recursive statics for testing.
Any arguments to a recursive function that are unchanged and forwarded
don't need to be applied each time we recurse; instead, you can
define a containing lambda that captures them, reducing the number of
applications dramatically when recursing.
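A loose Rust analogue of the transformation (illustrative names, not the generated UPLC): the unchanged argument is held once by an enclosing scope, and the recursion only re-applies what actually varies.
```rust
// `sep` never changes across recursive calls, so it is captured once in the
// enclosing scope (here, a struct) rather than re-applied at every call.
struct Joiner<'a> {
    sep: &'a str,
}

impl<'a> Joiner<'a> {
    fn join(&self, items: &[&str]) -> String {
        match items {
            [] => String::new(),
            [last] => (*last).to_string(),
            [head, rest @ ..] => format!("{}{}{}", head, self.sep, self.join(rest)),
        }
    }
}

fn main() {
    let joiner = Joiner { sep: ", " };
    assert_eq!(joiner.join(&["a", "b", "c"]), "a, b, c");
}
```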
feat: finish expect type on data constr
fix: tuple clause was exposing all items regardless of discard
fix: tuple clause was not receiving complex_clause flag
fix: condition for assert where constructor had 0 args was tripping assert
fix: had to rearrange var and discard assignment to ensure correct val is returned
fix: binop had the wrong type
When rendering missing or redundant patterns, linked-list would
wrongly suggest the last nil constructor as a pattern on non-empty
list.
For example, before this commit, the exhaustiveness checker would yield:
```
[(_, True), []]
```
as a suggestion, the result of a list pattern with a single element
being rendered as `(_, True) :: Nil`. Blindly following the
compiler suggestion here would cause a type unification error (since
`[]` doesn't unify with a 2-tuple).
Indeed, we mustn't render the Nil constructor when rendering non-empty
lists! So the correct suggestion should be:
```
[(_, True)]
```
Similar to blueprint address and blueprint policy, this just prints the
hash of the validator; useful if you need the hash, and you don't want
to pipe the address to a bech32 decoder and juggle the hex.
This was trickier than expected as the expression parser, and in particular the bin-op parser, will interpret negative patterns as the continuation of a binary operation and eventually choke on the next right-arrow symbol. This is due to how we completely erase newlines once we're done with the lexer. The newline separating when clauses is actually semantically important. In principle, we could parse an expression only until the next newline.
Ideally, we would keep that newline in the list of tokens, but it's difficult to figure out which newline to keep between two right arrows since a clause guard can be written over multiple lines. Though, since we know that this is only truly a problem for negative integers, we can use the same trick as for tuples and define a new 'NewLineMinus' token. That token CANNOT be part of a binop expression. That means it's impossible to write a binary operation with a minus over multiple lines, or more specifically, with the '-' symbol on a newline. This sounds like a fair limitation. What we get in exchange is less ambiguity when parsing patterns following expressions in when-clause cases.
Another, more cumbersome, option could be to preserve the first newline encountered after a 'right-arrow' symbol and before any parenthesis or curly brace is found (which would otherwise signal the beginning of a new block). That requires traversing, at least partially, the list of tokens twice. This feels unnecessary for now, until we face a similar issue with another binary operator.
The main goal is to make the parser more reusable so it can be used for when-clauses, instead of the expression parser. A side goal has been to make it more readable by moving the construction of some untyped expressions into methods on UntypedExpr. Doing so, I got rid of the extra temporary 'ParseArg' type and re-used the generic 'CallArg' instead, simply using an Option<UntypedExpr> as value to get the same semantics as 'ParseArg' (which would distinguish between plain call args and holes). Now the chained parser is in a bit more of a reusable state.
We do not actually ever parse negative values in there, as a negative value is a combination of a 'Negate' and 'UInt' expression.
However, for patterns and constants, it'll be simpler to parse whole Int values as there's no ambiguity with arithmetic operations
there. To avoid the confusion of having some 'Int' constructors containing only non-negative values and some covering the whole range,
I've renamed the constructor to 'UInt' to make this more obvious.
This was a bit more tricky than anticipated but played out nicely in
the end. Now we have one holistic way of parsing todos and errors
instead of it being duplicated between when/clause and sequence. The
error/todo parser has been moved up to the expression part rather than
being managed when parsing sequences. Not sure what motivated that to
begin with.
Fixes #621.
Alleviate a bit more the top-level expression parser. Note that we
probably need a bit more discipline in what we export and at what level,
because there doesn't seem to be much logic as to whether a parser is
private, exported to the crate only, or to the wide open. I'd be in favor
of exporting everything by default.
Also moved the logic for 'int' and 'string' there though it is trivial. Yet, for bytearray, it tidies things nicely by removing them from the 'utils' module.
Equality on a union type is potentially dangerous as the compiler won't
complain if we add a new case that we don't cover. So we reverse the
relationship by yielding a `Token` for a given `AssignmentKind`. This way
we can use a pattern match that keeps us covered for future cases.
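A small sketch of the shape this enables (variant names are simplified for illustration):
```rust
#[derive(Debug)]
enum AssignmentKind {
    Let,
    Expect,
}

#[derive(Debug, PartialEq)]
enum Token {
    Let,
    Expect,
}

// Deriving the token from the kind with an exhaustive match: adding a new
// AssignmentKind variant becomes a compile error here instead of silently
// slipping past an equality check elsewhere.
impl From<AssignmentKind> for Token {
    fn from(kind: AssignmentKind) -> Self {
        match kind {
            AssignmentKind::Let => Token::Let,
            AssignmentKind::Expect => Token::Expect,
        }
    }
}
```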
The 'public' util was arguably not really adding much except a layer of indirection.
In the end, one useful parsing behavior to abstract is the idea of 'optional flag' that we use for both 'pub' and 'opaque' keywords.