This also applies to unpinned packages. In that case, we can't do a HEAD request, so we fall back to looking at what's available in the cache and using the most recently downloaded version. This is only best-effort, as the most recently downloaded version may not actually be the latest. But come on: this is a case where (a) someone didn't pin any version, and (b) is trying to build in an offline setup. We could possibly handle that edge case better but, let's see if anyone ever complains about it first.
When the version isn't a git sha or a tag, we always check that we got
the last version of a particular dependency before building. This is
to avoid those awkward moments where someone tries to use something
from the stdlib that is brand new and, despite using 'main', gets a
strange build failure about it not being available.
An important note is that we don't actually re-download the package
in that case; we merely check an HTTP ETag from a (cheap) 'HEAD'
request to the package registry. If the ETag hasn't changed, then the
local version is up-to-date.
This behavior is completely bypassed if the version is specified
using a git sha or a tag, as in that case we can assume that fetching
it once is enough (and that it won't change). If a package maintainer
force-pushes a tag, however, there may be a discrepancy, and the only
way around that is to `rm -r ./build`.
Best-effort check to assert whether a version refers to a git sha digest or a tag. When it does, we
avoid re-downloading it if it's already fetched. But when it doesn't, and thus refers to a branch,
we always re-download it. Note however that the download might be short-circuited by the
system-wide package cache, so a download doesn't necessarily mean a network request.
The package cache is, however, smart enough to assert whether a package in the cache must be
re-downloaded (using an HTTP ETag). So this is mostly about delegating the re-download logic to
the global package cache.
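As a rough sketch of what 'best-effort' can mean here (the names and heuristics below are illustrative, not necessarily the actual code): a full git sha is 40 hex characters, and a semver-ish string is treated as a tag; everything else is assumed to be a movable branch reference.

```rust
/// Best-effort: treat a 40-character hexadecimal string as a full git
/// sha, and a 'v'-optional dotted-digits string (e.g. "1.0.21") as a
/// tag. Anything else is assumed to be a branch.
fn is_git_sha_or_tag(version: &str) -> bool {
    let is_sha = version.len() == 40 && version.chars().all(|c| c.is_ascii_hexdigit());

    let is_tag = version
        .strip_prefix('v')
        .unwrap_or(version)
        .split('.')
        .all(|part| !part.is_empty() && part.chars().all(|c| c.is_ascii_digit()));

    is_sha || is_tag
}
```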
fix: Opaque types are now properly handled in code gen (i.e. in code gen functions, in datums/redeemers, and in from-data casts)
chore: add specific nested opaque type tests to code gen
Computes the policy ID of a minting policy; adds guards to the blueprint address command to check that the target validator is not a minting policy. I wasn't 100% sure where the errors should live, so I'm happy to move them if there are objections.
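For context, a policy ID on Cardano is the blake2b-224 hash of the serialized script prefixed with a language tag. A sketch using the `cryptoxide` crate (the actual implementation may differ):

```rust
use cryptoxide::{blake2b::Blake2b, digest::Digest};

/// The policy ID (script hash) is blake2b-224 over a language tag byte
/// (0x02 for Plutus V2) followed by the CBOR-serialized script.
fn policy_id(plutus_v2_cbor: &[u8]) -> [u8; 28] {
    let mut hasher = Blake2b::new(28);
    hasher.input(&[0x02]);
    hasher.input(plutus_v2_cbor);
    let mut digest = [0u8; 28];
    hasher.result(&mut digest);
    digest
}
```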
fix: Issue where using a var pattern in a when was passing the constr index instead of the constr
fix: Issue where expecting on a list had unexpected behaviors depending on the list's length
closes #569
* added new methods to Definitions which don't use expect
* lookup was failing for the special map/pair case when resolving list generics
Co-authored-by: Pi <pi@sundaeswap.finance>
Negative numbers now show up as a constant instead of `0 - that number`
Expect on constructors without field maps no longer panics
Expect on constructors with discard as assigned field names no longer throws a 'free unique' error
- [x] Show links to prelude, builtins and stdlib
- [x] Remove project 'owner' in the header (only show repository)
- [x] Fix type annotation overflow on mobile
- [x] Remove the pre-wrap mode on mobile
The apply command now works only from serialized CBOR data (instead of UPLC syntax), so it is no longer possible to specify arbitrary UPLC terms through the CLI. I believe this is an acceptable limitation for now, especially given that Aiken will never generate blueprints with non-data terms at the interface boundary.
These were needed before as a way to _partially deserialize_
blueprints. Indeed, some commands required accessing information from
the blueprint, but not necessarily the schema. So out of laziness (or
cleverness?), we only deserialized validators as serde::Value and
achieved that through the use of generics.
Now that validators and schemas have proper deserialisers, we can
simply deserialize a blueprint.
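To make the before/after concrete, here's a hedged sketch of the generics trick being retired (type and field names are illustrative, not the actual definitions):

```rust
use serde::Deserialize;

// The blueprint used to be generic in its schema type, so commands that
// didn't care about schemas could instantiate it with serde_json::Value
// and leave that part of the JSON opaque.
#[derive(Deserialize)]
struct Blueprint<S> {
    validators: Vec<Validator<S>>,
}

#[derive(Deserialize)]
struct Validator<S> {
    title: String,
    datum: Option<S>,
}

// Before: schemas left as raw, uninterpreted JSON.
type PartialBlueprint = Blueprint<serde_json::Value>;
```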
TODO: Our serialisation/deserialisation is safe with regards to
itself; i.e. it roundtrips. However, we only support a subset of the
specified blueprint format. For example, we would fail to deserialize
blueprints that have inline data-schemas (we only use references).
This is needed in order to deserialize a JSON blueprint and use it to perform validation.
Still TODO:
- [ ] Write JSON deserializer for 'Schema'
Which should now be relatively straightforward.
* move uplc::ast::builder to uplc::builder
* rename aiken_lang::uplc to aiken_lang::gen_uplc
* move aiken_lang::air and aiken_lang::builder to aiken_lang::gen_uplc
as submodules
Co-authored-by: Kasey White <kwhitemsg@gmail.com>
* rename force_wrap to force
* add a bunch of builder methods to Term<Name>
* refactor one tiny location to show off builder methods
* split generate into `generate` and `generate_test`
* create wrap_as_multi_validator function
Co-authored-by: Kasey White <kwhitemsg@gmail.com>
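As a flavor of what those builder methods enable (a sketch; the exact method set and signatures on Term<Name> may differ):

```rust
use uplc::ast::{Name, Term};

// \x -> \y -> x, written with chained builders instead of manually
// nesting Term::Lambda constructors. Each .lambda(p) wraps the receiver
// as the body of a new lambda binding `p`.
fn const_function() -> Term<Name> {
    Term::var("x").lambda("y").lambda("x")
}
```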
This was a bit tricky and I ended up breaking things down a lot and
trying different paths. This commit is the result of the most
satisfying one.
It introduces a new 'concept' and two new types: Definitions and Reference.
These elements are meant to reflect JSON pointers and JSON-schema
definitions which we now use for pretty much all user-defined
data-types.
In fact, Schemas are no longer inlined, but are always referencing
some schema under "definitions".
This indirection is necessary in order to cope with recursive types.
And while it's only truly necessary for recursive types, using it
consistently makes it both easier to produce and easier to consume.
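As a purely illustrative example of that indirection (built with serde_json for readability; the layout loosely follows the blueprint format, and the definition key is hypothetical):

```rust
use serde_json::json;

// Every schema is now a "$ref" into "definitions" rather than inlined.
// Note the JSON-pointer escaping: '/' in a key becomes '~1' in the ref.
fn referenced_schema_example() -> serde_json::Value {
    json!({
        "validators": [{
            "title": "my_module.my_validator",
            "datum": {
                "schema": { "$ref": "#/definitions/my_module~1MyDatum" }
            }
        }],
        "definitions": {
            "my_module/MyDatum": { "dataType": "integer" }
        }
    })
}
```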
---
The blueprint generation for recursive types here also works thanks to
the 'Definitions' data-structure wrapper around a BTreeMap. This uses
a strategy where:
(1) schemas are only generated if they haven't been seen before
(2) schemas are marked as seen BEFORE actually being generated (to
effectively stop recursive generation).
This relies on one important aspect: the key must uniquely identify
a given schema. This means that we also have to monomorphize
data-types with generic parameters here, and use keys that are
specialized to one data-type.
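A minimal sketch of that strategy, assuming a simplified 'Definitions' wrapper (the real type differs in detail, e.g. it hands back References):

```rust
use std::collections::BTreeMap;

#[derive(Default)]
struct Definitions<T> {
    inner: BTreeMap<String, Option<T>>,
}

impl<T> Definitions<T> {
    /// Generate a schema for `key` only if it hasn't been seen before.
    /// The key is marked as seen (inserted as None) BEFORE `generate`
    /// runs, so a recursive occurrence of the same key short-circuits
    /// instead of recursing forever.
    fn register(&mut self, key: &str, generate: impl FnOnce(&mut Self) -> T) {
        if !self.inner.contains_key(key) {
            self.inner.insert(key.to_string(), None); // (2) mark as seen first
            let schema = generate(self); // (1) may recursively register
            self.inner.insert(key.to_string(), Some(schema));
        }
    }
}
```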
---
In this large overhaul we've also lost one thing which I didn't bother
re-introducing yet, to keep the work manageable: titles for record
fields. Before, we used to pull those from record constructors when
available, yet now, every record constructor has been replaced by a
`$ref`. We could theoretically attach a title to the reference. I'll
try to quickly add that in a later commit.
Having the data's schema be optional at the level of 'Schema' did not allow representing cases where opaque data occurs at an arbitrary nesting level. So I introduced a new variant 'Opaque' on 'Data' to fill that gap.
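Shape-wise, that amounts to something like the following (a sketch; the actual 'Data' enum has more variants and different payloads):

```rust
// 'Data' describes a (plutus) data schema. The new 'Opaque' variant
// stands for "some data, about which we say nothing further", and can
// now appear at any nesting depth instead of only at the top level.
enum Data {
    Integer,
    Bytes,
    List(Vec<Data>),
    Opaque,
}
```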