The apply command now works only from serialized CBOR data (instead of UPLC syntax). So it is no longer possible to specify arbitrary (non-data) terms through the CLI. I believe it to be an acceptable limitation for now, especially given that Aiken will never generate blueprints with non-data terms at the interface boundary.
These were needed before as a way to _partially deserialize_
blueprints. Indeed, some commands required accessing information from
the blueprint, but not necessarily the schemas. So out of laziness (or
cleverness?), we only deserialized validators as serde::Value and
achieved that through the use of generics.
Now that validators and schemas have proper deserialisers, we can
simply deserialize a blueprint.
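To make the change concrete, here is a rough sketch of both approaches (a minimal illustration with made-up field names, not the actual crate types):

```
use serde::Deserialize;

// Before: the blueprint was generic in its validators, which were kept as
// opaque `serde_json::Value`s whenever only the surrounding fields mattered.
#[derive(Deserialize)]
struct GenericBlueprint<T> {
    preamble: serde_json::Value,
    validators: Vec<T>,
}

type PartialBlueprint = GenericBlueprint<serde_json::Value>;

// After: validators (and their schemas) have proper deserialisers, so the
// whole document can be parsed in one go.
#[derive(Deserialize)]
struct Blueprint {
    preamble: Preamble,
    validators: Vec<Validator>,
}

#[derive(Deserialize)]
struct Preamble {
    title: String,
}

#[derive(Deserialize)]
struct Validator {
    title: String,
}

fn parse(raw: &str) -> Result<Blueprint, serde_json::Error> {
    serde_json::from_str(raw)
}
```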
TODO: Our serialisation/deserialisation is safe with regard to
itself; i.e. it roundtrips. However, we only support a subset of the
specified blueprint format. For example, we would fail to deserialize
blueprints that have inline data-schemas (we only use references).
This was a bit tricky and I ended up breaking things down a lot and
trying different paths. This commit is the result of the most
satisfying one.
It introduces a new 'concept' and two new types: Definitions and Reference.
These elements are meant to reflect JSON pointers and JSON-schema
definitions which we now use for pretty much all user-defined
data-types.
In fact, schemas are no longer inlined, but always reference some
schema under "definitions".
This indirection is necessary in order to cope with recursive types.
And while it's only truly necessary for recursive types, using it
consistently makes it both easier to produce and easier to consume.
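As a rough illustration of these two elements (names follow the description above; the exact shapes and key format are hypothetical):

```
use std::collections::BTreeMap;

// A Reference is a thin wrapper around a JSON pointer into the blueprint's
// top-level "definitions" object. Instead of inlining a schema, a validator's
// datum or redeemer then carries something like:
//
//   { "$ref": "#/definitions/my_module~1MyType" }
pub struct Reference {
    inner: String,
}

impl Reference {
    pub fn as_json_pointer(&self) -> String {
        format!("#/definitions/{}", self.inner)
    }
}

// Definitions gathers every schema ever produced, keyed by a name that
// uniquely identifies the (monomorphized) data-type it describes.
pub struct Definitions<T> {
    inner: BTreeMap<String, T>,
}
```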
---
The blueprint generation for recursive types here also works thanks to
the 'Definitions' data-structure, a wrapper around a BTreeMap. It uses
a strategy where:
(1) schemas are only generated if they haven't been seen before;
(2) schemas are marked as seen BEFORE actually being generated (to
effectively stop recursive generation).
This relies on one important aspect: the key must uniquely identify a
given schema. This means that we also have to monomorphize data-types
with generic parameters here, and use keys that are specialized to one
data-type.
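A sketch of that strategy, with assumed method names and simplified signatures (not the actual implementation), could look roughly like this:

```
use std::collections::BTreeMap;

pub struct Reference(String);

pub struct Definitions<T> {
    inner: BTreeMap<String, Option<T>>,
}

impl<T> Definitions<T> {
    /// Register a schema under a key that uniquely identifies its
    /// (monomorphized) data-type, and hand back a reference to it.
    pub fn register<F>(&mut self, key: &str, build: F) -> Reference
    where
        F: FnOnce(&mut Self) -> T,
    {
        // (1) only generate the schema if it hasn't been seen before.
        if !self.inner.contains_key(key) {
            // (2) mark the key as seen *before* generating, using a
            // placeholder, so that a recursive call for the same key
            // short-circuits instead of looping forever.
            self.inner.insert(key.to_string(), None);
            let schema = build(self);
            self.inner.insert(key.to_string(), Some(schema));
        }
        Reference(format!("#/definitions/{key}"))
    }
}
```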
---
In this large overhaul we've also lost one thing which I didn't bother
re-introducing yet to keep the work manageable: titles for record
fields. Before, we used to pull those from the record constructor when
available, yet now every record constructor has been replaced by a
`$ref`. We could theoretically attach a title to the reference. I'll
try to quickly add that in a later commit.
Tracing is now turned OFF by default when:
- building project
- building documentation
- building dependencies
It can be turned ON only when building the project, using
`--keep-traces`. That means it's not possible to build dependencies
with traces. The address command's `--rebuild` flag will also rebuild
without traces.
Tracing is however turned ON by default when:
- checking the project (and running tests).
In this scenario, tracing can be disabled using `--no-traces` (if, for
example, one wants to analyze the execution units of specific functions
without having to manually remove traces from the code).
We want the lookup to yield a result when there's only a single
validator and no title is provided, so that users can simply run
'aiken address' in their project when it's unambiguous. The validator's
name is only required to disambiguate between multiple validators.
I also noticed that the order of arguments in with_validator was
wrong. Somehow.
The command also returns structured output as JSON, so it's more
easily consumed by other tools.
```
Parsing script context
Simulating 78ec148ea647cf9969446891af31939c5d57b275a2455706782c6183ef0b62f1
Redeemer Spend → 0
{"mem":151993,"cpu":58180696}
```
This is still a bit clunky as the interface expects parameters in UPLC form and we don't do any kind of verification. So it is easy to shoot oneself in the foot at the moment (for example, by applying an integer where a data term was expected). To be improved later.
This calculates a validator's address from validators found in a blueprint. It also provides a convenient way to attach a delegation part to the resulting address if need be. The command is meant to provide a nice user experience and works 'out of the box' for projects that have only a single validator. Just call 'aiken address' to get the validator's address.
Note that the command-line doesn't provide any option to configure the target network. It automatically assumes testnet, and will until we deem the project ready for mainnet. Those brave enough to run an Aiken program on mainnet will find a way anyway.
The blueprint is generated at the root of the repository and is
intended to be versioned with the rest. It acts as a business card
that contains a lot of practical information. There's a variety of tools
we can then build on top of open-source contracts. And, quite
importantly, the blueprint is language-agnostic; it isn't specific to
Aiken. So it is really meant as an interop format within the
ecosystem.
There are restrictions regarding how modules are named, but given that packages are tied to repositories anyway, there's no way someone can publish and use an Aiken package under 'aiken-lang' without being part of the organization. So the restriction on the command-line is pointless. Plus, it prevents us from using 'aiken-lang' as a placeholder name for tutorials.
This makes it easier to add new dependencies, without having to
manually edit the `aiken.toml` file.
The command is accessible via two different paths:
- aiken deps add
or simply
- aiken add
for this is quite common to find at the top level of command-line
tools, and we still want to keep commands for managing dependencies
grouped under a command sub-group rather than all at the top level. So
we're merely promoting that one for visibility.
This is a bit cleaner, as the 'cmd/new' module had many on-the-fly
functions which are better scoped inside this module.
Plus, it plays nicely with the std::str::FromStr trait definition.
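For example, the parsing it enables might look roughly like this (a hypothetical sketch with made-up type and error names, not the actual module contents):

```
use std::str::FromStr;

// Hypothetical: an "{owner}/{repository}" argument, as one would pass to
// `aiken new`, parsed through the standard FromStr trait.
#[derive(Debug)]
pub struct ProjectName {
    pub owner: String,
    pub repository: String,
}

impl FromStr for ProjectName {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s.split_once('/') {
            Some((owner, repository)) if !owner.is_empty() && !repository.is_empty() => {
                Ok(ProjectName {
                    owner: owner.to_string(),
                    repository: repository.to_string(),
                })
            }
            _ => Err(format!("'{s}' is not of the form {{owner}}/{{repository}}")),
        }
    }
}

// With FromStr in place, callers can simply write:
//     let name: ProjectName = "aiken-lang/hello_world".parse()?;
```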
In case of issues with dependencies, this at least allows safely
removing cached packages. Before that, it could be hard to know where
the cached files were even located without looking at the source code.
```
Clearing /Users/ktorz/Library/Caches/aiken/packages
Removing aiken-lang-stdlib-7ca9e659688ea88e1cfdc439b6c20c4c7fae9985.zip
Removing aiken-lang-stdlib-main@04eb45df3c77f6611bbdff842a0e311be2c56390f0fa01f020d69c93ff567fe5.zip
Removing aiken-lang-stdlib-6b482fa00ec37fe936c93155e8c670f32288a686.zip
Removing aiken-lang-stdlib-1cedbe85b7c7e9c4036d63d45cad4ced27b0d50b.zip
Done
```