This was a bit trickier than anticipated but played out nicely in
the end. Now we have one holistic way of parsing todos and errors
instead of it being duplicated between when/clause and sequence. The
error/todo parser has been moved up to the expression part rather than
being managed when parsing sequences. Not sure what motivated that to
begin with.
Fixes #621.
Alleviate the top-level expression parser a bit more. Note that we
probably need a bit more discipline in what we export and at what level,
because there doesn't seem to be much logic behind whether a parser is
private, exported to the crate only, or fully public. I'd be in favor
of exporting everything by default.
Also moved the logic for 'int' and 'string' there, though it is trivial. For 'bytearray', however, it tidies things nicely by removing it from the 'utils' module.
Equality on a union-type is potentially dangerous as the compiler won't
complain if we add a new case that we don't cover. So we reverse the
mapping by yielding a `Token` for a given `AssignmentKind`. This way
we can use a pattern-match that keeps us covered for future cases.
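To illustrate, a minimal sketch of the idea (variant names here are
illustrative, not the full enums):
```
enum Token {
    Let,
    Expect,
}

enum AssignmentKind {
    Let,
    Expect,
}

impl From<AssignmentKind> for Token {
    fn from(kind: AssignmentKind) -> Token {
        match kind {
            AssignmentKind::Let => Token::Let,
            AssignmentKind::Expect => Token::Expect,
            // No `_` arm: adding a new AssignmentKind variant fails to
            // compile until it is handled here, unlike an `==` check.
        }
    }
}
```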
The 'public' util was arguably not adding much except a layer of indirection.
In the end, one useful parsing behavior to abstract is the idea of an 'optional flag' that we use for both the 'pub' and 'opaque' keywords.
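For reference, a rough sketch of such an 'optional flag' combinator,
assuming a chumsky-style parser (which Aiken uses) and an illustrative
Token enum:
```
use chumsky::prelude::*;

#[derive(Clone, PartialEq, Eq, Hash, Debug)]
enum Token {
    Pub,
    Opaque,
}

// Parses an optional keyword into a boolean: present => true.
fn optional_flag(token: Token) -> impl Parser<Token, bool, Error = Simple<Token>> {
    just(token).or_not().map(|opt| opt.is_some())
}
```
Both keyword parsers can then share `optional_flag(Token::Pub)` and
`optional_flag(Token::Opaque)` instead of duplicating the logic.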
Somehow, miette doesn't play well with spans when using char indices.
So we have to count the number of bytes in strings / chars, so that
spans align accordingly.
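A quick illustration of why byte counting is needed (plain Rust, nothing
Aiken-specific): char indices and byte offsets diverge as soon as a
multi-byte character shows up, and miette spans are byte-based.
```
fn main() {
    let s = "héllo";
    assert_eq!(s.chars().count(), 5); // 5 characters...
    assert_eq!(s.len(), 6); // ...but 6 bytes: 'é' is 2 bytes in UTF-8
    assert_eq!('é'.len_utf8(), 2);
}
```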
Computes the policy ID of a minting policy. Also added guards for 'blueprint address' to check that the validator isn't a minting policy. Wasn't 100% sure where the errors should live, so I'm happy to move them if there are objections.
fix: I thought NamedDeBruijn took advantage of Binder for encoding and decoding.
It does not...
fix: DeBruijn was being converted to NamedDeBruijn incorrectly
It was not consuming the next case if there was no condition being checked in the clause.
Now it properly always consumes the next clause, unless it is the last one.
Nothing to see here, as they all have the same signature. Implementing
arithmetic binary operators and boolean logic operators will require
some more logic.
This is simply syntactic sugar which desugars to a function call with two arguments mapped to the specified binary operator.
Only works for '>' at this stage as a PoC; extending to all binops in the next commit.
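One way to picture the desugaring, as a hedged sketch over hypothetical
AST types (the real compiler's types differ):
```
enum BinOp {
    GreaterThan,
}

enum Expr {
    Var(String),
    Call { fun: Box<Expr>, args: Vec<Expr> },
    BinaryOp { op: BinOp, left: Box<Expr>, right: Box<Expr> },
}

fn desugar(expr: Expr) -> Expr {
    match expr {
        Expr::BinaryOp { op, left, right } => {
            let fun = match op {
                // Only '>' is mapped at this stage, mirroring the PoC.
                BinOp::GreaterThan => Expr::Var("greater_than".to_string()),
            };
            Expr::Call {
                fun: Box::new(fun),
                args: vec![desugar(*left), desugar(*right)],
            }
        }
        // A complete pass would also recurse into sub-expressions.
        other => other,
    }
}
```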
fix: Issue where using var pattern in a when was passing the constr index instead of the constr
fix: Issue where expecting on a list had unexpected behaviors based on list length
fix: rearrange clauses and fill in gaps now handles nested patterns in a uniform way
fix: discards in records were being sorted incorrectly, leading to type issues
chore: remove some filter maps in cases where None is impossible anyway
chore: some refactoring on a couple of functions to clean up
closes #569
* added new methods to Definitions that don't use expect
* lookup was failing for the special map/pair case
when resolving list generics
Co-authored-by: Pi <pi@sundaeswap.finance>
closes #553
* rename flat to encode
* rename unflat to decode
* alias both to their old names
* both only print to stdout
the user can pipe the output to a file
* split cbor and hex flags
* hex flag works for either cbor or flat
* encode takes --to flag
[name, named-debruijn, debruijn]
* decode takes --from flag
[name, named-debruijn, debruijn]
This was causing a free unique. The fix: after stripping applied usages
of identity, we check whether it is still passed around, and if so we
leave the function declaration in.
Negative numbers now show up as a constant instead of '0 - that number'.
Expect on constructors without field maps no longer panics.
Expect on constructors with discards as assigned field names no longer throws a free unique.
- [x] Show links to prelude, builtins and stdlib
- [x] Remove project 'owner' in the header (only show repository)
- [x] Fix type annotation overflow on mobile
- [x] Remove the prewrap mode on mobile
@MartinSchere noticed a weird error
where an unknown variable wasn't being reported:
the type checker was incorrectly scoping
arguments for anonymous function definitions.
Luckily his compilation failed due to a FreeUnique
error during code gen which is good. But this may
have been the source of other mysterious FreeUnique
errors.
I also noticed that anonymous functions allowed
arguments with the same name to be defined:
`fn(arg, arg)`
This now returns an error.
Thus allowing us to use code-gen-created functions to expect on data types, including recursive ones.
Some minor tweaks to the Air.
Added a UPLC optimization for later.
The apply command now works only from serialized CBOR data (instead of UPLC syntax). So it is no longer possible to specify arbitrary CBOR terms through the CLI. I believe this to be an acceptable limitation for now; especially given that Aiken will never generate blueprints with non-data terms at the interface boundary.
In the same spirit of the existing Term builder; I also added a `data`
method to lift a `PlutusData` into a `Term<T>` and generalized a bit
the builder to only require a `Term<Name>` when necessary and remain
generic otherwise.
The `PlutusData` builder could potentially be upstreamed to pallas
directly.
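As a rough sketch of what that `data` method amounts to, with stand-in
types (the real `Term` and `Constant` live in the uplc crate, and
`PlutusData` comes from pallas):
```
#[derive(Clone)]
struct PlutusData; // stand-in for pallas' PlutusData

enum Constant {
    Data(PlutusData),
}

enum Term<T> {
    Var(T),
    Constant(Constant),
}

impl<T> Term<T> {
    // Lift a PlutusData value into a term; generic in the name type T,
    // so no `Term<Name>` is required for plain constants.
    fn data(d: PlutusData) -> Term<T> {
        Term::Constant(Constant::Data(d))
    }
}
```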
These were needed before as a way to _partially deserialize_
blueprints. Indeed, some commands required accessing information from
the blueprint, but not necessarily the schema. So out of laziness (or
cleverness?), we only deserialized validators as serde_json::Value and
achieved that through the use of generics.
Now that validators and schemas have proper deserialisers, we can
simply deserialize a blueprint.
TODO: Our serialisation/deserialisation is safe with regards to
itself; i.e. it roundtrips. However, we only support a subset of the
specified blueprint format. For example, we would fail to deserialize
blueprints that have inline data-schemas (we only use references).
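To give an idea of the shape this takes, a much-simplified sketch of a
typed deserializer (field names follow CIP-0057; the real structs carry
more information, e.g. the preamble and the schemas themselves):
```
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct Blueprint {
    validators: Vec<Validator>,
}

#[derive(Serialize, Deserialize)]
struct Validator {
    title: String,
    #[serde(rename = "compiledCode")]
    compiled_code: String,
}

fn parse(json: &str) -> Result<Blueprint, serde_json::Error> {
    // No more serde_json::Value escape hatch: the whole document
    // roundtrips through typed structs.
    serde_json::from_str(json)
}
```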
This is needed in order to deserialize a JSON blueprint and use it to perform validation.
Still TODO:
- [ ] Write JSON deserializer for 'Schema'
Which should now be relatively straightforward.
Was originally written as a way to fix a failing property test on the
program_builder, but the program builder is now gone. This function
is still useful to have around.
Unused params were being incorrectly reported.
This was because params need to be initialized
in a scope above both of the validator functions. This
manifested when using a multi-validator where one of
the params was not used in both validators.
The easy fix was to add a field called
`is_validator_param` to `ArgName`. Then,
when inferring a function, we don't initialize args
that are validator params. We now handle this
in a scope created beforehand in the match branch for
validators in the `infer_definition` function. In there
we call `.in_new_scope` and initialize params for usage
detection.
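A condensed sketch of the mechanism (`ArgName` and `is_validator_param`
are from the commit above; the rest is illustrative):
```
struct ArgName {
    name: String,
    is_validator_param: bool,
}

fn init_usage_detection(args: &[ArgName], scope: &mut Vec<String>) {
    for arg in args {
        // Validator params are skipped here; they are initialized once in
        // the enclosing scope, so usage is tracked across *both* functions
        // of a multi-validator.
        if !arg.is_validator_param {
            scope.push(arg.name.clone());
        }
    }
}
```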
Fixes #472.
This also partially addresses #195. However, as pointed out in one of
the comments, there's no 'official rule' when it comes to what should
be considered valid escape sequences. Haskell relies mostly on the
attoparsec library, and Rust also has its own set of rules.
This is in particular true for unicode escape sequences, but there is
a common middle ground for some usual single-character escapes such as
\n or \\. So we now at least support these.
For more complicated escape sequences, please refer to #195 for now and
keep the discussion going there.
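For the record, roughly the kind of single-character escapes now
recognized (the exact set in the parser may differ slightly):
```
fn unescape(c: char) -> Option<char> {
    match c {
        'n' => Some('\n'),
        't' => Some('\t'),
        'r' => Some('\r'),
        '"' => Some('"'),
        '\\' => Some('\\'),
        '0' => Some('\0'),
        _ => None, // anything fancier is left to the discussion in #195
    }
}
```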
One involved zero-arg functions being hoisted instead of compiled and replaced.
The second involved not updating a function's dependency function scope, which then hoisted it to a lower scope and caused a free unique.
* new module scope which holds some ancestor logic
* rework some things to truly hide scope increments
Co-authored-by: Kasey White <kwhitemsg@gmail.com>
* move uplc::ast::builder to uplc::builder
* rename aiken_lang::uplc to aiken_lang::gen_uplc
* move aiken_lang::air and aiken_lang::builder to aiken_lang::gen_uplc
as submodules
Co-authored-by: Kasey White <kwhitemsg@gmail.com>
* rename force_wrap to force
* add a bunch of builder methods to Term<Name>
* refactor one tiny location to show off builder methods
* split generate into `generate` and `generate_test`
* create wrap_as_multi_validator function
Co-authored-by: Kasey White <kwhitemsg@gmail.com>
And disable multi-pattern clauses. I was originally just controlling
whether we disabled that from the parser, but then I figured we
could actually support multi-pattern clauses quite easily by simply
desugaring a multi-pattern into multiple clauses.
This is only syntactic sugar, which means that the cost of writing
that on-chain is as expensive as writing the fully expanded form; yet
it seems like a useful shorthand, especially for short clause
expressions.
This commit however disables multi-pattern when-clauses, which we do
not support in the code generation. Instead, use a single pattern on
tuples for that.
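The desugaring itself is simple enough to sketch with hypothetical AST
types: a clause carrying several patterns expands into one clause per
pattern, all sharing the same right-hand side.
```
#[derive(Clone)]
struct Pattern;

#[derive(Clone)]
struct Expr;

struct Clause {
    patterns: Vec<Pattern>,
    body: Expr,
}

fn desugar_clause(clause: Clause) -> Vec<Clause> {
    let Clause { patterns, body } = clause;
    patterns
        .into_iter()
        .map(|pattern| Clause {
            patterns: vec![pattern],
            // The body is duplicated syntactically, which is why the
            // on-chain cost matches the fully expanded form.
            body: body.clone(),
        })
        .collect()
}
```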
Isolated doc comments cause the compiler to panic with:
```
'no consecutive empty lines'
```
This is reproducible when doc comments are sandwiched between
comments and newlines.
The typed AST produced as a result of type-checking the program will
no longer contain unused let-bindings. They still raise warnings in
the code so that developers are aware that they are being ignored.
This is mainly done to prevent mistakes for people coming from an
imperative background who may think that things like:
```
let _ = foo(...)
```
should have some side-effects. It does not, and it's similar to
assigned variables that are never used / evaluated. We now properly
strip those elements from the AST when encountered and raise proper
warnings, even for discarded values.
It's generally a bad idea to use equality on enum variants because this won't trigger any compiler errors in the future, yet could have hazardous effects when adding new variants. So it's usually preferable to use exhaustive pattern matching and let the compiler warn about missing cases in places where it matters.
This leads to more consistent formatting across entire Aiken programs.
Before that commit, only long expressions would be formatted on a
newline, causing non-consistent formatting and additional reading
barrier when looking at source code.
Programs also now take more vertical space, which is better for more
friendly diffing in version control systems (especially git).
It is now possible to leave a hole in a type annotation and have the compiler fill in the expected type for us.
This is a pretty useful debugging tool when playing with complex functions.
The difference between 'FlexBreak' and 'Break(Mode::Strict/Flexible)' has always confused me; and it turned out that the 'FlexBreak' thingy is never used. This is dead code, so I removed it.
Rules are now as follows:
- If a pipeline contains a newline, then the entire pipeline is formatted over multiple lines.
- If it doesn't, then it's formatted as a single-line UNLESS it cannot fit; in which case, we fallback to multiline again.
This was a bit tricky and I ended up breaking things down a lot and
trying different paths. This commit is the result of the most
satisfying one.
It introduces a new 'concept' and types: Definitions and Reference.
These elements are meant to reflect JSON pointers and JSON-schema
definitions which we now use for pretty much all user-defined
data-types.
In fact, Schemas are no longer inlined, but are always referencing
some schema under "definitions".
This indirection is necessary in order to cope with recursive types.
And while it's only truly necessary for recursive types, using it
consistently makes it both easier to produce and easier to consume.
---
The blueprint generation for recursive types here also works thanks to
the 'Definitions' data-structure wrapper around a BTreeMap. This uses
a strategy where:
(1) schemas are only generated if they haven't been seen before
(2) schemas are marked as seen BEFORE actually being generated (to
effectively stop a recursive generation).
This relies on one important aspect: the key must uniquely identify
a given schema. Which means that we also have to monomorphize
data-types with generic parameters here, and use keys that are
specialized to one data-type.
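Boiled down to a sketch (stand-in types; the real `Definitions` wrapper
is richer), the strategy looks like this:
```
use std::collections::BTreeMap;

struct Schema;

#[derive(Default)]
struct Definitions {
    inner: BTreeMap<String, Option<Schema>>,
}

impl Definitions {
    fn register<F>(&mut self, key: &str, generate: F)
    where
        F: FnOnce(&mut Definitions) -> Schema,
    {
        if !self.inner.contains_key(key) {
            // (1) only generate if the key hasn't been seen before;
            // (2) mark as seen BEFORE generating, so that a recursive
            //     type recursing back into `register` stops here.
            self.inner.insert(key.to_string(), None);
            let schema = generate(self);
            self.inner.insert(key.to_string(), Some(schema));
        }
    }
}
```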
---
In this large overhaul we've also lost one thing which I didn't bother
re-introducing yet, to keep the work manageable: titles for record
fields. Before, we used to pull those from the record constructor when
available, yet now, every record constructor has been replaced by a
`$ref`. We could theoretically attach a title to the reference. I'll
try to quickly add that in a later commit.
Having the data's schema be optional at the level of the 'Schema' did not allow representing cases where there would be an opaque data at an arbitrary nesting level. So I introduced a new variant 'Opaque' on 'Data' to fill that gap.
These functions relied on the same dependency and had the same scope, so insertion order was determined by encounter rather than by dependency handling. We now prioritize dependency order to prevent free uniques.
- Builtins IR now acts like Record IR in terms of argument consumption
- UnConstrData now returns a Pair(Data, Data) to conform with how pairs are treated behind the scenes.
This has been removed from the CIP-0057 specification since validators
are often re-used for multiple purposes (especially validators with
arity 2). It's misleading to assign a validator a purpose since the
purpose distinction actually happens _within_ the validator itself.