This is still very rough at the moment, but it does a couple of things:
1. 'ArgVia' now contains an Expr/TypedExpr which should unify to a Fuzzer. This avoids having to introduce custom logic to handle fuzzer referencing. So this now accepts function calls, field accesses, etc., so long as they unify to the right thing.
2. I've done quite a lot of cleanup in aiken-project, mostly around the tests and the naming surrounding them. What we used to call 'Script' is now called 'Test' and is an enum between UnitTest (ex-Script) and PropertyTest. I've moved some boilerplate and relevant functions under those modules' impls.
3. I've completed the end-to-end pipeline of:
- Compiling the property test
- Compiling the fuzzer
- Generating an initial seed
- Running property tests sequentially, threading the seed through each step (see the sketch just below).
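For intuition, here is a minimal sketch of the shapes involved, assuming an illustrative 'Seed' alias (the actual runner lives on the Rust side; all names below are made up):
```
// Illustrative only: 'Seed' stands in for whatever the runtime
// actually threads through (simplified here to an Int).
type Seed =
  Int

type Fuzzer<a> =
  fn(Seed) -> (Seed, a)

// Chaining two fuzzers: the seed returned by the first run feeds
// the second one, which is exactly how the runner threads the seed
// from one step to the next.
fn and_then(fuzzer: Fuzzer<a>, f: fn(a) -> Fuzzer<b>) -> Fuzzer<b> {
  fn(seed) {
    let (next_seed, value) = fuzzer(seed)
    f(value)(next_seed)
  }
}
```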
An interesting finding: I had to wrap the property test in a wrapper similar to the one we use for validators, to ensure we convert primitive types wrapped in Data back to UPLC terms. This is necessary because the fuzzer returns a ProtoPair (and soon an Array) which holds 'Data'.
At the moment, we do nothing with the size, though the size should ideally grow after each iteration (up to a certain cap).
In addition, there are a couple of TODO/FIXME comments that I left in the code as reminders of what's left to do beyond the obvious (error and success reporting, testing, etc.).
The parameter is special as it takes no annotation but a 'via' keyword followed by an expression that should unify to a Fuzzer<a>, where Fuzzer<a> = fn(Seed) -> (Seed, a). The current commit only allows name identifiers for now. Ultimately, this may allow full expressions.
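For illustration, a property test under this design looks something like this ('any_int' being a hypothetical fuzzer brought into scope, not an actual stdlib function):
```
// 'any_int' is assumed to unify to Fuzzer<Int>,
// i.e. fn(Seed) -> (Seed, Int).
test prop_addition_commutes(n via any_int) {
  n + 1 == 1 + n
}
```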
We cannot enforce internal invariants on opaque types from structural checks on Data alone. Thus, it is forbidden to expose an opaque type in an outward-facing interface. Instead, users should rely on intermediate representations and lift them into opaque types using the constructors and methods provided by the type (e.g. Dict.from_list, Rational.from_int, Rational.new, ...).
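For example, a sketch (not actual stdlib code) of what such a lifting constructor looks like:
```
// The non-zero denominator invariant cannot be verified by inspecting
// raw Data alone; going through 'new' guarantees it always holds.
pub opaque type Rational {
  numerator: Int,
  denominator: Int,
}

pub fn new(numerator: Int, denominator: Int) -> Option<Rational> {
  if denominator == 0 {
    None
  } else {
    Some(Rational { numerator: numerator, denominator: denominator })
  }
}
```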
This commit allows Data to be optionally annotated with a
phantom type. This doesn't change anything in codegen, but we can now
leverage this information to generate better blueprint schemas.
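For instance (a sketch; 'Metadata' and 'MyDatum' are made-up names):
```
pub type Metadata {
  label: Int,
}

pub type MyDatum {
  // At runtime this is plain 'Data'; the phantom parameter only
  // tells the blueprint generator which schema to document.
  payload: Data<Metadata>,
}
```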
This allows finer-grained control over how traces are shown. Users can now instruct the compiler to preserve only their user-defined traces, or only the compiler-generated ones, or all, or none. We also want to add another trace level on top of that: 'compact', which only shows line numbers and will work for both user-defined and compiler-generated traces.
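As a reminder of the distinction, a user-defined trace is anything written explicitly with the 'trace' keyword, for example:
```
fn is_positive(x: Int) -> Bool {
  // A user-defined trace: kept or stripped depending on the chosen
  // trace level. Compiler-generated traces (e.g. those emitted by
  // 'expect') are controlled independently.
  trace @"checking sign"
  x > 0
}
```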
This is a *slight* hack / abuse of the code() method, as we are now
doing a bit of formatting within that function. Yet, we only do so at
the very top level (i.e. the project's Error) because we can't
actually fiddle with how miette presents errors.
Also removed the 'clear' flag; clearing now happens by default
instead of clogging up the terminal view.
This now works pretty nicely, and the logic is back under
`aiken_project`.
Rather than have this logic in the aiken binary, this provides a generic
mechanism to do "something" on file change events. KtorZ is going to
handle wiring it up to the CLI in the best way for the project.
I tried to write some tests for this, but it's hard to isolate the
watcher logic without wrestling with the borrow checker, or overly
neutering this utility.
- Add support to the formatter for these doc comments
- Add a new field `doc: Option<String>` to `Arg`
- Don't attach docs immediately after typechecking a module
- instead we should do it on demand in docs, build, and lsp
- the check command doesn't need to have any docs attached
- doing it more lazily defers the computation until later, making
typechecking feedback a bit faster
- Add support for function arg and validator param docs in
`attach_module_docs` methods
- Update some snapshots
- Add `put_doc` to `Arg`
closes #685
The 'HEAD' call that is done to resolve package revisions from
unpinned versions is already quite cheap, but it would still be better
to avoid overloading GitHub with such calls; especially for users of a
language server that would compile on-the-fly very often. Upstream
packages don't change often, so there's no need to constantly check
the etag.
So we now keep a local version of the etags we fetched, as well as a
timestamp from the last time we fetched them, so that we only re-fetch
them if more than an hour has elapsed. This should be fairly resilient
while still massively improving the UX for people showing up after a
day and trying to use the latest 'main' features.
This means that we now effectively have two caching levels:
- In the manifest, we store previously fetched etags.
- In the filesystem, we have a cache of already downloaded zip archives.
The first cache is basically invalidated every hour, while the second
cache is only invalidated when an etag changes. For pinned versions,
nothing is invalidated as they are considered immutable.
And so, this also holds for unpinned packages when building offline. In this case, we can't do a HEAD request, so we fall back to looking at what's available in the cache and use the most recently downloaded version. This is only best-effort, as the most recently downloaded version may not be the actual latest. But come on: this is a case where (a) someone didn't pin any version, and (b) is trying to build in an offline setup. We could possibly handle that edge case better, but let's see if anyone ever complains about it first.
When the version isn't a git sha or a tag, we always check that we
got the latest version of a particular dependency before building.
This is to avoid those awkward moments where someone tries to use
something brand new from the stdlib and, despite using 'main', gets a
strange build failure about it not being available.
An important note is that we don't actually re-download the package
when this occurs; we merely check an HTTP ETag from a (cheap) 'HEAD'
request on the package registry. If the tag hasn't changed, then the
local version is correct.
This behavior is completely bypassed if the version is specified
using a git sha or a tag, as in that case we can assume that fetching
it once is enough (and that it cannot change). If a package maintainer
force-pushes a tag, however, there may be a discrepancy, and the only
way around that is to `rm -r ./build`.
Best effort to assert whether a version refers to a git sha digest or a tag. When it does, we
avoid re-downloading it if it's already fetched. But when it doesn't, and thus refers to a branch,
we always re-download it. Note however that the download might be short-circuited by the
system-wide package cache, so a download doesn't necessarily mean a network request.
The package cache is however smart enough to assert whether a package in the cache must be
re-downloaded (using the HTTP ETag). So this is mostly about delegating the re-downloading logic
to the global packages cache.
fix: Opaque types are now properly handled in code gen (i.e. code gen functions, in datums/redeemers, in from data casts)
chore: add specific nested opaque type tests to code gen
Computes the policy ID of a minting policy. Also added guards to the blueprint address command to check that the target isn't a minting policy. I wasn't 100% sure where the errors should live, so I'm happy to move them if there are objections.
fix: Issue where using a var pattern in a when was passing the constr index instead of the constr
fix: Issue where expecting on a list had unexpected behaviors based on list length
closes #569
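To illustrate the second fix: list patterns in an expect must now match (or fail) strictly according to the list's actual length, e.g.:
```
test expect_on_lists() {
  let xs = [1, 2, 3]
  // Succeeds only when 'xs' has at least one element.
  expect [head, ..] = xs
  // Succeeds only when 'xs' has exactly three elements.
  expect [a, b, c] = xs
  head == a && b == 2 && c == 3
}
```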
* added new methods to Definitions which don't use expect
* lookup was failing for the special map/pair case when resolving list generics
Co-authored-by: Pi <pi@sundaeswap.finance>
Negative numbers now show up as a constant instead of '0 - that number'
Expect on constructors without field maps no longer panics
Expect on constructors with discards as assigned field names no longer throws a 'free unique' error
- [x] Show links to prelude, builtins and stdlib
- [x] Remove project 'owner' in the header (only show repository)
- [x] Fix type annotation overflow on mobile
- [x] Remove the prewrap mode on mobile
The apply command now works only from serialized CBOR data (instead of UPLC syntax). So it is no longer possible to specify arbitrary CBOR terms through the CLI. I believe this is an acceptable limitation for now; especially given that Aiken will never generate blueprints with non-data terms at the interface boundary.
These were needed before as a way to _partially deserialize_
blueprints. Indeed, some commands required accessing information from
the blueprint, but not necessarily the schema. So out of laziness (or
cleverness?), we only deserialized validators as serde::Value and
achieved that through the use of generics.
Now that validators and schemas have proper deserialisers, we can
simply deserialize a blueprint.
TODO: Our serialisation/deserialisation is safe with regards to
itself; i.e. it roundtrips. However, we only support a subset of the
specified blueprint format. For example, we would fail to deserialize
blueprints that have inline data-schemas (we only use references).
This is needed in order to deserialize a JSON blueprint and use it to perform validation.
Still TODO:
- [ ] Write a JSON deserializer for 'Schema', which should now be
relatively straightforward.
* move uplc::ast::builder to uplc::builder
* rename aiken_lang::uplc to aiken_lang::gen_uplc
* move aiken_lang::air and aiken_lang::builder to aiken_lang::gen_uplc
as submodules
Co-authored-by: Kasey White <kwhitemsg@gmail.com>
* rename force_wrap to force
* add a bunch of builder methods to Term<Name>
* refactor one tiny location to show off builder methods
* split generate into `generate` and `generate_test`
* create wrap_as_multi_validator function
Co-authored-by: Kasey White <kwhitemsg@gmail.com>
This was a bit tricky, and I ended up breaking things down a lot and
trying different paths. This commit is the result of the most
satisfying one.
It introduces a new 'concept' and types: Definitions and Reference.
These elements are meant to reflect JSON pointers and JSON-schema
definitions which we now use for pretty much all user-defined
data-types.
In fact, schemas are no longer inlined but always reference some
schema under "definitions".
This indirection is necessary in order to cope with recursive types.
And while it's only truly necessary for recursive types, using it
consistently makes it both easier to produce and easier to consume.
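A recursive type makes the need obvious: inlining its schema would never terminate, whereas a reference under "definitions" can simply point to itself. For example:
```
// Inlining the schema of 'Node' would recurse forever; a reference
// breaks the cycle. Note that, in a blueprint, such a generic type
// only ever appears through fully-applied (monomorphized) instances.
pub type Tree<a> {
  Leaf(a)
  Node(Tree<a>, Tree<a>)
}
```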
---
The blueprint generation for recursive types here also works thanks to
the 'Definitions' data-structure wrapper around a BTreeMap. This uses
a strategy where:
(1) schemas are only generated if they haven't been seen before
(2) schemas are marked as seen BEFORE actually being generated (to
effectively stop a recursive generation).
This relies on one important aspect: the key must uniquely identify
a given schema. This means that we also have to monomorphize
data-types with generic parameters here, and use keys that are
specialized to one data-type.
---
In this large overhaul, we've also lost one thing which I didn't
bother re-introducing yet to keep the work manageable: titles for
record fields. Before, we used to pull those from record constructors
when available, yet now every record constructor has been replaced by
a `$ref`. We could theoretically attach a title to the reference. I'll
try to quickly add that in a later commit.
Having the data's schema be optional at the level of the 'Schema' did not allow representing cases where an opaque data sits at an arbitrary nesting level. So I introduced a new variant 'Opaque' on 'Data' to fill that gap.
This has been removed from the CIP-0057 specification since validators
are often re-used for multiple purposes (especially validators with
arity 2). It's misleading to assign a validator a purpose since the
purpose distinction actually happens _within_ the validator itself.
Tracing is now turned OFF by default when:
- building project
- building documentation
- building dependencies
It can be turned ON only when building the project, using
`--keep-traces`. That means it's not possible to build dependencies
with traces. The address command's `--rebuild` flag will also rebuild
without traces.
Tracing is however turned ON by default when:
- checking the project (and running tests).
In this scenario, tracing can be disabled using `--no-traces` (if,
for example, one wants to analyze the execution units of specific
functions without having to manually remove traces from the code).
This caused me some trouble. In my first approach, I ended up having
multiple traces because nested values would be evaluated twice; once
as the condition, and once as part of the continuation.
To prevent this, we can simply evaluate the condition once and return
a plain True / False boolean as the outcome. So this effectively
transforms any expression:
```
expr
```
into
```
if expr { True } else { trace("...", False) }
```
We want the lookup to yield a result when there's only a single
validator and no title is provided, so that users can simply run
'aiken address' in their project when it's unambiguous. The
validator's name is only required to disambiguate between multiple
validators.
I also noticed that the order of arguments in with_validator was
wrong. Somehow.