Also, this commit makes `apply_term` automatically re-intern the
program, since it isn't safe to apply arbitrary terms onto a UPLC
program. In particular, terms that introduce new let-bindings (via
lambdas) invalidate the already generated DeBruijn indices.
The problem doesn't occur for pure constant terms like Data, so we
still provide a safe and fast variant, `apply_data`, for when it's needed.
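To make the failure mode concrete, here is a minimal, self-contained sketch — all types are toy stand-ins, not the real uplc API — of why applying an arbitrary term forces a re-intern while applying a plain Data constant doesn't:

```rust
use std::collections::HashMap;

#[derive(Clone, Debug)]
enum Term {
    Var(String, usize),               // name + unique id assigned by the interner
    Lambda(String, usize, Box<Term>), // introduces a new binder
    Apply(Box<Term>, Box<Term>),
    Data(Vec<u8>),                    // pure constant: contains no variables
}

// Walk a term and assign fresh unique ids, restoring the invariant that
// every binder in the program has a distinct id.
fn reintern(term: &mut Term, next: &mut usize, env: &mut HashMap<String, usize>) {
    match term {
        Term::Var(name, unique) => {
            if let Some(u) = env.get(name) {
                *unique = *u;
            }
        }
        Term::Lambda(name, unique, body) => {
            *next += 1;
            *unique = *next;
            let shadowed = env.insert(name.clone(), *next);
            reintern(body, next, env);
            match shadowed {
                Some(u) => { env.insert(name.clone(), u); }
                None => { env.remove(name); }
            }
        }
        Term::Apply(fun, arg) => {
            reintern(fun, next, env);
            reintern(arg, next, env);
        }
        Term::Data(_) => {}
    }
}

// Safe for any term: the argument may introduce new binders (lambdas) whose
// ids clash with the program's, so rebuild every id from scratch before the
// DeBruijn indices are derived from them.
fn apply_term(program: Term, arg: Term) -> Term {
    let mut applied = Term::Apply(Box::new(program), Box::new(arg));
    reintern(&mut applied, &mut 0, &mut HashMap::new());
    applied
}

// Fast path: a Data constant binds no variables, so existing ids stay valid.
fn apply_data(program: Term, data: Vec<u8>) -> Term {
    Term::Apply(Box::new(program), Box::new(Term::Data(data)))
}
```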
This allows for more fine-grained control over how traces are shown. Users can now instrument the compiler to preserve only their user-defined traces, only the compiler-generated ones, all of them, or none. We also want to add another trace level on top of that: 'compact', which only shows line numbers and will work for both user-defined and compiler-generated traces.
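As a sketch, the knobs could look something like this (the names are illustrative assumptions, not necessarily the final API):

```rust
// Which traces to preserve at compile time.
#[derive(Clone, Copy)]
pub enum TraceFilter {
    UserDefined,  // keep only traces written by the user
    CompilerOnly, // keep only compiler-generated traces
    All,          // keep everything
    None,         // strip all traces
}

// How much of each preserved trace to emit.
#[derive(Clone, Copy)]
pub enum TraceLevel {
    Verbose, // full trace messages
    Compact, // line numbers only; applies to user-defined and compiler traces alike
}
```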
We rely on some errors to just bubble up and get printed.
By matching on the result at the top level like this, we blocked some
error messages from being printed. For me this showed up
when `cargo run -- new thing/thing` printed nothing even though a
`thing` folder already existed. It has been the pattern for
some time now for some subcommands to call process::exit(1) themselves in
situations where they need to handle error reporting more specially. It
may seem lame, hacky, or repetitive, but it's easy to maintain and read.
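A minimal sketch of that pattern (the function name is hypothetical): a subcommand that needs special error reporting prints its own diagnostic and exits, instead of returning an error for the top level to (maybe) print.

```rust
use std::path::Path;
use std::process;

fn exec_new(root: &Path) {
    if root.exists() {
        // Report specially, then exit; nothing upstream can swallow this.
        eprintln!("error: `{}` already exists", root.display());
        process::exit(1);
    }
    // ... scaffold the new project ...
}
```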
Also removed the 'clear' flag; the terminal is now cleared by default
instead of clogging up the view.
This now works pretty nicely, and the logic is back under
`aiken_project`.
Rather than have this logic in the aiken binary, this provides a generic
mechanism to do "something" on file change events. KtorZ is going to
handle wiring it up to the CLI in the best way for the project.
I tried to write some tests for this, but it's hard to isolate the
watcher logic without wrestling with the borrow checker, or overly
neutering this utility.
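For illustration, a hedged sketch of such a generic mechanism using the `notify` crate; the actual `aiken_project` API may well differ:

```rust
use notify::{recommended_watcher, RecursiveMode, Watcher};
use std::{path::Path, sync::mpsc::channel};

// Run `on_change` every time something under `dir` changes.
pub fn watch<F>(dir: &Path, mut on_change: F) -> notify::Result<()>
where
    F: FnMut(),
{
    let (tx, rx) = channel();
    let mut watcher = recommended_watcher(tx)?;
    watcher.watch(dir, RecursiveMode::Recursive)?;

    // Block on the event stream, invoking the callback for each event.
    for event in rx {
        if event.is_ok() {
            on_change();
        }
    }

    Ok(())
}
```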
This adds the following command
```
aiken watch
```
There are some open questions to answer, though:
- I really like the ergonomics of `aiken watch`, but it also makes sense
  as a flag to `aiken check`, `aiken build`, etc. Should we
  support the flag, the command, or both?
- Right now I duplicated the with_project method because it forces
  process::exit(1). Should we refactor this, and if so, how?
- Are there other configuration options we want?
There's really no scenario where we want to generate boilerplate that
always ends up being removed. In particular, the boilerplate breaks
the tutorial, as it generates conflicting validators in the blueprint.
The only argument in favor of the boilerplate is to serve as an example
and give people a syntax reminder. However, this is better done in
the README or in the user manual directly.
Similar to blueprint address and blueprint policy, this just prints the
hash of the validator; useful if you need the hash, and you don't want
to pipe the address to a bech32 decoder and juggle the hex.
Computes the policy ID of a minting policy. Also adds guards to blueprint address to check that the validator isn't a minting policy. I wasn't 100% sure where the errors should live, so I'm happy to move them if there are objections.
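For example (both subcommands print to stdout; any flags are omitted here as illustration):

```
aiken blueprint hash
aiken blueprint policy
```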
Closes #553.
* rename flat to encode
* rename unflat to decode
* alias both to their old names
* both only print to stdout; the user can pipe the output to a file
* split cbor and hex flags
* hex flag works for either cbor or flat
* encode takes a --to flag
  [name, named-debruijn, debruijn]
* decode takes a --from flag
  [name, named-debruijn, debruijn]
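A hypothetical invocation, assuming the commands live under `aiken uplc` (the exact argument shape is an assumption):

```
aiken uplc encode program.uplc --to debruijn --cbor --hex
aiken uplc decode program.cbor --from debruijn --cbor --hex
```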
The apply command now works only from serialized CBOR data (instead of UPLC syntax), so it is no longer possible to specify arbitrary terms through the CLI. I believe this is an acceptable limitation for now, especially given that Aiken will never generate blueprints with non-data terms at the interface boundary.
These were needed before as a way to _partially deserialize_
blueprints. Indeed, some commands required access to some information
from the blueprint, but not necessarily the schemas. So out of laziness (or
cleverness?), we only deserialized validators as serde::Value and
achieved that through the use of generics.
Now that validators and schemas have proper deserialisers, we can
simply deserialize a blueprint.
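Roughly, the change looks like this (type shapes are simplified stand-ins for the real ones):

```rust
use serde::Deserialize;

// Before: generic over the schema slot, so commands could pick
// S = serde_json::Value to partially deserialize a blueprint
// without touching schemas at all.
#[derive(Deserialize)]
struct GenericBlueprint<S> {
    validators: Vec<GenericValidator<S>>,
}

#[derive(Deserialize)]
struct GenericValidator<S> {
    title: String,
    datum: Option<S>,
    redeemer: S,
}

// After: Schema has a real Deserialize impl, so no generics are needed.
#[derive(Deserialize)]
struct Blueprint {
    validators: Vec<Validator>,
}

#[derive(Deserialize)]
struct Validator {
    title: String,
    datum: Option<Schema>,
    redeemer: Schema,
}

#[derive(Deserialize)]
struct Schema { /* ... proper schema fields ... */ }
```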
TODO: Our serialisation/deserialisation is safe with regards to
itself; i.e. it roundtrips. However, we only support a subset of the
specified blueprint format. For example, we would fail to deserialize
blueprints that have inline data-schemas (we only use references).
This was a bit tricky, and I ended up breaking things down a lot and
trying different paths. This commit is the result of the most
satisfying one.
It introduces a new 'concept' and types: Definitions and Reference.
These elements are meant to reflect JSON pointers and JSON-schema
definitions which we now use for pretty much all user-defined
data-types.
In fact, Schemas are no longer inlined, but are always referencing
some schema under "definitions".
This indirection is necessary in order to cope with recursive types.
And while it's only truly necessary for recursive types, using it
consistently makes it both easier to produce and easier to consume.
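For instance, a validator's redeemer schema now points into "definitions" rather than being inlined; a simplified, illustrative blueprint excerpt:

```json
{
  "validators": [
    {
      "title": "my_validator",
      "redeemer": {
        "schema": { "$ref": "#/definitions/Int" }
      }
    }
  ],
  "definitions": {
    "Int": { "dataType": "integer" }
  }
}
```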
---
The blueprint generation for recursive types here also works thanks to
the 'Definitions' data-structure wrapper around a BTreeMap. This uses
a strategy where:
(1) schemas are only generated if they haven't been seen before
(2) schemas are marked as seen BEFORE actually being generated (to
effectively stop a recursive generation).
This relies on one important aspect: the key must uniquely identify
a given schema. This means that we also have to monomorphize
data-types with generic parameters here, and use keys that are
specialized to one data-type.
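A condensed sketch of that strategy (a simplification, not the exact code; `Schema` is left abstract):

```rust
use std::collections::BTreeMap;

pub struct Schema; // stands in for the real schema type

// Wrapper around a BTreeMap; `None` marks a schema as "seen, being generated".
pub struct Definitions {
    inner: BTreeMap<String, Option<Schema>>,
}

impl Definitions {
    pub fn register<F>(&mut self, key: &str, generate: F)
    where
        F: FnOnce(&mut Definitions) -> Schema,
    {
        // (1) only generate schemas that haven't been seen before.
        if self.inner.contains_key(key) {
            return;
        }

        // (2) mark as seen BEFORE generating, so a recursive reference
        // to `key` hits the guard above instead of recursing forever.
        self.inner.insert(key.to_string(), None);

        let schema = generate(self);
        self.inner.insert(key.to_string(), Some(schema));
    }
}
```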
---
In this large overhaul we've also lost one thing which I didn't bother
re-introducing yet, to keep the work manageable: titles for record
fields. Before, we used to pull those from the record constructor when
available; yet now, every record constructor has been replaced by a
`$ref`. We could theoretically attach a title to the reference. I'll
try to quickly add that in a later commit.
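That would presumably look something like this (illustrative; the field name is hypothetical):

```json
{
  "title": "fieldName",
  "$ref": "#/definitions/Int"
}
```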