Before this commit, we would always show the 'declared form' of type aliases, with their generic, non-instantiated parameters. We now try to unify the annotation with the underlying inferred type to provide better alias pretty-printing.
We cannot enforce internal invariants on opaque types through structural checks on Data alone. Thus, it is forbidden to expose an opaque type in an outward-facing interface. Instead, users should rely on intermediate representations and lift them into opaque types using constructors and methods provided by the type (e.g. Dict.from_list, Rational.from_int, Rational.new, ...).
closes #569
* added new methods to Definitions that don't use expect
* fixed a lookup that was failing for the special map/pair case when
  resolving list generics
Co-authored-by: Pi <pi@sundaeswap.finance>
This is needed in order to deserialize a JSON blueprint and use it to perform validation.
Still TODO:
- [ ] Write a JSON deserializer for 'Schema', which should now be relatively straightforward.
This was a bit tricky, and I ended up breaking things down a lot and
trying different paths. This commit is the result of the most
satisfying one.
It introduces a new concept and two types: Definitions and Reference.
These elements are meant to reflect JSON pointers and JSON-schema
definitions, which we now use for pretty much all user-defined
data-types.
In fact, schemas are no longer inlined; they always reference some
schema under "definitions".
This indirection is necessary in order to cope with recursive types.
And while it's only truly necessary for recursive types, using it
consistently makes it both easier to produce and easier to consume.
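As a rough sketch, and assuming shapes that may differ from the actual codebase, the two pieces could look like this in Rust:

```rust
use std::collections::BTreeMap;

/// A JSON pointer into the blueprint's "definitions" section,
/// e.g. "#/definitions/Int" (exact format assumed for illustration).
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
pub struct Reference(pub String);

/// A wrapper around a BTreeMap of schemas, keyed by reference.
#[derive(Debug, Default)]
pub struct Definitions<T> {
    inner: BTreeMap<Reference, T>,
}
```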
---
The blueprint generation for recursive types here also works thanks
to the 'Definitions' data-structure, a wrapper around a BTreeMap. It
uses a strategy where:
(1) schemas are only generated if they haven't been seen before
(2) schemas are marked as seen BEFORE actually being generated (to
effectively stop a recursive generation).
This relies on one important aspect: the key must uniquely identify a
given schema. This means that we also have to monomorphize data-types
with generic parameters here, using keys that are specialized to one
concrete data-type.
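Concretely, and building on the Definitions sketch above, the registration routine could look something like this (the method name and signature are assumptions, not the actual API):

```rust
impl<T> Definitions<T> {
    /// Generate and store the schema for `key`, unless already seen.
    pub fn register<E>(
        &mut self,
        key: &Reference,
        placeholder: T,
        build: impl FnOnce(&mut Self) -> Result<T, E>,
    ) -> Result<(), E> {
        // (1) only generate schemas that haven't been seen before.
        if self.inner.contains_key(key) {
            return Ok(());
        }

        // (2) mark the key as seen BEFORE generating, so that a
        // recursive occurrence of the same data-type hits the branch
        // above and stops instead of looping forever.
        self.inner.insert(key.clone(), placeholder);
        let schema = build(self)?;
        self.inner.insert(key.clone(), schema);

        Ok(())
    }
}
```

Note how this falls apart if two different monomorphizations of the same generic type end up under the same key; hence the specialized keys.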
---
In this large overhaul, we've also lost one thing which I didn't
bother re-introducing yet, to keep the work manageable: titles for
record fields. Before, we used to pull those from record constructors
when available; yet now, every record constructor has been replaced
by a `$ref`. We could theoretically attach a title to the reference.
I'll try to quickly add that in a later commit.
Having the data's schema be optional at the level of the 'Schema' did not allow representing cases where opaque data occurs at an arbitrary nesting level. So I introduced a new variant 'Opaque' on 'Data' to fill that gap.
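For illustration only, here is an assumed shape of 'Data' with the new variant; every variant other than 'Opaque' is a guess:

```rust
/// Plutus Data, as representable in a blueprint.
pub enum Data {
    Integer,
    Bytes,
    List(Box<Data>),
    Map(Box<Data>, Box<Data>),
    /// Data whose internal structure is deliberately left
    /// unspecified; it can now appear at any nesting level.
    Opaque,
}
```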
Here's a trick though: I got lazy (a bit) and did not write a full deserializer for Schema, because that is busywork and not at all necessary at this stage. Instead, I've made the blueprint parameterized by a generic type <T>, which represents the type of the underlying blueprint's schema. When deserializing from JSON, we can default to 'Value' to get a free deserializer. Since all we're interested in is the program and the metadata (purpose and title) of a validator, it works nicely.
Serialization, however, expects a Blueprint<Schema>, and most of the functions operate over a Blueprint<Schema> anyway.
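A minimal sketch of that parameterization, with assumed field names (the real types almost certainly carry more fields):

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
pub struct Blueprint<T> {
    pub preamble: Preamble,
    pub validators: Vec<Validator<T>>,
}

#[derive(Serialize, Deserialize)]
pub struct Preamble {
    pub title: String,
}

#[derive(Serialize, Deserialize)]
pub struct Validator<T> {
    pub title: String,
    pub purpose: String,
    pub datum: Option<T>,
    pub redeemer: T,
    #[serde(rename = "compiledCode")]
    pub program: String,
}

// Deserializing without a hand-written Schema deserializer: default
// the schema type to serde_json::Value and get the impl for free.
//
// let blueprint: Blueprint<serde_json::Value> =
//     serde_json::from_str(&contents)?;
```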
This also introduces two levels of representable types (because it's needed at least for tuples):
Plutus Data (a.k.a. Data) and UPLC primitives / constants (a.k.a. Schema).
In practice, we don't want to specify blueprints that use direct UPLC primitives, because there's little support for producing those in the ecosystem. So we should aim to produce only Data whenever we can. Yet we don't want to forbid UPLC primitives either, in case people know what they're doing. This means we need to capture that difference well in the type modelling (in Rust and in the CIP-0057 specification).
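Under those constraints, the two levels could be modelled along these lines, with 'Schema' wrapping the 'Data' sketch from earlier (variant names are, again, illustrative):

```rust
/// UPLC primitives / constants: supported, but discouraged since the
/// ecosystem has little support for producing them directly.
pub enum Schema {
    Unit,
    Boolean,
    Integer,
    Bytes,
    String,
    // Needed at least for tuples of raw constants.
    Pair(Box<Schema>, Box<Schema>),
    List(Vec<Schema>),
    /// The preferred level: anything expressible as Plutus Data.
    Data(Data),
}
```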
I've also simplified the error type for now, just to provide some degree of feedback while working on this. I'll refine it later with proper errors.
The blueprint is generated at the root of the repository and is
intended to be versioned with the rest. It acts as a business card
that contains a lot of practical information, and there's a variety
of tools we can then build on top of open-source contracts. Quite
importantly, the blueprint is language-agnostic; it isn't specific to
Aiken. So it is really meant as an interop format within the
ecosystem.