Constants are like tiny programs, so they are bound by the same rules
as validators and other programs. In fact, functions are slightly more
flexible in that they allow generic annotations like `List<a>`.
Yet, there is no way to construct such a generic structure with
inhabitants in a way that satisfies the type-checker. In the example
of `List<a>`, the only inhabitant of that type that we can construct
is the empty list. Anything else would require holding onto some
generic value.
In addition, we can't force literal values into a generic annotation;
something like:
```
const foo: List<a> = [1, 2, 3]
```
wouldn't type-check either, since the right-hand side would unify to
`List<Int>`. And again, the only right-hand side that can type-check
is the empty list, with no inhabitants.
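By contrast, this is the one generic constant shape that does
type-check under these rules (a minimal sketch):

```aiken
// Type-checks: the empty list inhabits `List<a>` for any `a`.
const empty: List<a> = []
```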
The added restriction on generic functions is necessary because, while
we allow constants to return lambdas, we cannot (easily) generate UPLC
that is generic in its arguments. By the time we generate UPLC, the
underlying types have to be known.
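So a constant may return a lambda so long as it is monomorphic; a
sketch of what this permits (the names here are illustrative):

```aiken
// Fine: every type is concrete, so the generated UPLC is not
// generic in its argument.
const add_one: fn(Int) -> Int = fn(n) { n + 1 }
```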
Without that, the language server might trigger awkward warnings about
unused imports due to trace stripping. So it's better for the language
server to compile/check projects in their most expanded form.
This is not a "proper" fix as it simply get rid of the warning
altogether (whether you use or not the destructured values).
The reason for removing the warning entirely is that (1) it's
simpler, but more so (2) there's no impact on the final code produced
_anyway_. Redundant let bindings are already removed by the compiler;
and while that implicit behaviour warrants a proper warning when the
redundant assignment comes from a user-defined assignment, here the
assignment is introduced by the compiler to begin with, as another
implicit behaviour!
So we have an implicit behaviour triggering a warning on another
implicit behaviour. The truth is, there's no impact in having those
parameters destructured and unused. And since users are already not
aware that this results in an implicit let assignment being inserted
in place for them, there's no need for the warning at all.
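To illustrate with a hypothetical handler (assuming handler parameters
accept destructuring patterns, as this change implies; field names are
assumptions):

```aiken
use cardano/transaction.{Transaction}

validator example {
  // `inputs` is bound by the destructuring but never used; the implicit
  // `let` the compiler inserts for it no longer triggers a warning.
  spend(_datum, _redeemer, _o_ref, Transaction { inputs, fee, .. }: Transaction) {
    fee >= 0
  }
}
```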
Avoid having the interface hang for several seconds without feedback
while counterexamples are being simplified. This gives the user a
heads-up that a search for a counterexample is going on.
Funny enough, we thought about that, but only across packages. Now, the
situation gets a little tricky because of the folder structure: it's
easy to define a module "foo" in `env`, `lib` and/or `validators`. From
the compiler's perspective, they all have the same name.
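Concretely, each of these files defines a module the compiler sees as
`foo`:

```
├── env/foo.ak
├── lib/foo.ak
└── validators/foo.ak
```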
For recursive structures like tuples, the span itself isn't enough to
ensure uniqueness of elements; in particular for tuples holding
elements of the same type.
This is only a start. It compiles, but with a few TODOs left open. In particular, it doesn't currently handle constants that depend on other constants or functions; nor does it hoist constants.
The playground doesn't (and cannot) depend on aiken-project, because that becomes a gigantic pain. So instead, we try to keep essential stuff inside aiken-lang when possible.
Technically, we always need a fallback simply because of the way the
UPLC works: the last case in the handler pattern match is always going
to be an `else`.
We could optimize that away and, when the validator is exhaustive, make
the last handler the fallback. Yet, it's really a micro-optimization
that saves us one extra if/else. So, for the sake of getting things
working, we always assume that there's a fallback, with the extra
condition that when the validator is exhaustive (i.e. there's a
handler covering all purposes), the fallback HAS TO BE the default
fallback (i.e. `(_) => fail`).
This allows us to gracefully format it out, and also raise an error in
cases where there's an extraneous custom fallback.
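In surface syntax, the default fallback is the one sketched below
(handler signatures abbreviated; names are illustrative):

```aiken
validator my_script {
  mint(_redeemer: Data, _policy_id: ByteArray, _tx: Data) {
    True
  }

  // The default fallback: formatted out when implicit, and the only
  // fallback accepted once every purpose has its own handler.
  else(_) {
    fail
  }
}
```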
When there's no type annotation in a validator handler signature, we
provide default annotations to help the type-checker. However, for
spend's datum and mint's policy_id, those annotations mustn't be `Data`,
but rather `Option<Data>` and `ByteArray` respectively.
Without that, when no annotations are provided, the compiler infers
invalid types and fails with incongruous errors.
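As a sketch of what the elaboration looks like (the exact signatures
below are illustrative):

```aiken
validator defaults {
  // Written without annotations...
  spend(datum, redeemer, o_ref, tx) {
    todo
  }
  // ...this is checked as if annotated:
  //
  //   spend(datum: Option<Data>, redeemer: Data, o_ref: Data, tx: Data)
  //
  // and likewise, a mint handler's policy_id defaults to ByteArray,
  // not Data.
}
```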
This is a little trick which detects record accesses and replaces them
with a simple var. The var itself is the validator handler name;
since it contains dots, it cannot be referred to by users explicitly.
Yet, fundamentally, it is semantically equivalent to just calling the
function by its name.
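For example (hypothetical names throughout), calling a handler from a
test goes through exactly this rewrite:

```aiken
validator my_script {
  mint(_redeemer: Data, _policy_id: ByteArray, _tx: Data) {
    True
  }
}

test my_script_mints() {
  // Parsed as a record access on `my_script`, then replaced by a var
  // pointing at the generated `my_script.mint` function.
  my_script.mint(Void, "policy", Void)
}
```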
Note that this commit also removes the weird backdoor that allowed
importing validators in modules starting with `tests`. Allowing
validator handlers to be used in importable modules requires more work
and is only arguably useful; so we will wait until someone complains
and reconsider the proper way to do it.
Unfortunately, as documented in:
https://github.com/IntersectMBO/cardano-ledger/issues/4571
some `Option` fields in the script context's certificates are going to
remain set to `None`, at least until the next hard fork. There's a risk
that people permanently lock their funds if they expect deposits on
registration credentials to ever be `Some`.
So, we introduce a special type that emulates an `Option` that can only
ever be `None`. We call it `Never`, and it is the first type of its
kind (i.e. with constructor indexes not starting at 0).
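Conceptually, it looks like an `Option` with the `Some` branch removed;
a sketch (the real definition lives in the prelude, where the compiler
special-cases the constructor index):

```aiken
// Morally `Option` minus `Some`: the single constructor is assigned
// index 1 (None's index) instead of 0, so it decodes against the
// same on-chain representation as `None`.
pub type Never {
  Never
}
```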
Without that, we may encounter weird error messages when writing
validators without an explicit `else`. Since we automatically fill it
in with a `fail`, it unifies, absent an annotation, to a generic
parameter. The existing check that looked for the body being an error
term was ill-advised, as it stops working as soon as one adds tracing
or makes the validator parameterized. Plus, it may simply trigger the
wrong behaviour, since one can now annotate a validator with _whatever_
and get past the type-checker by plunking a `fail` keyword in as the
body.
The rationale is twofold:
1. It's more consistent with how we already separate the validator
   name from its module.
2. Because `_` may be present in an Aiken validator's name, it is hard
   to read and I am afraid it could potentially conflict later on. So
   it's better to use a separator that cannot appear in validator
   names.
Some things remain invalid, in particular:
- We need to handle the annotated `Data` case, which we still parse
  correctly but no longer do anything about.
- There's also a strange behaviour with opaque types turned public?
- We now consistently desugar an `expect` in the last position as
  `Void`, regardless of the pattern. Desugaring to a boolean value is
  deemed too confusing.
- This commit also removes the desugaring for let-bindings. It's only
  ever allowed for _expect_, which then behaves like a side effect.
- We also now allow tests to return either `Bool` or `Void`. A test
  that returns `Void` is treated the same as a test returning `True`
  (see the sketch after the table below).
This is debatable, but I would argue that it's been sufficiently
annoying for people, and such a low-hanging fruit, that we ought to do
something about it.
The strategy here is simple: when we find a sequence of expressions
that ends with an assignment (let or expect), we simply desugar it
into two expressions: the assignment followed by either `Void` or a
boolean.
The latter is used when the assignment pattern is itself a boolean;
that boolean then becomes both the expected and the returned value.
The former, `Void`, is used for everything else. Said differently, any
assignment implicitly _returns Void_, except for booleans, which
return the actual patterned bool.
<table>
<thead>
<tr><th>expression</th><th>desugars into</th></tr>
</thead>
<tbody>
<tr>
<td>

```aiken
fn expect_bool(data: Data) -> Void {
  expect _: Bool = data
}
```

</td>
<td>

```aiken
fn expect_bool(data: Data) -> Void {
  expect _: Bool = data
  Void
}
```

</td>
</tr>
<tr>
<td>

```aiken
fn weird_maths() -> Bool {
  expect 1 == 2
}
```

</td>
<td>

```aiken
fn weird_maths() -> Bool {
  expect True = 1 == 2
  True
}
```

</td>
</tr>
</tbody>
</table>
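As a consequence, a test can lean entirely on a trailing `expect`; a
small sketch:

```aiken
test casts_to_int() {
  let data: Data = 42
  // In last position, this desugars into the expect followed by
  // `Void`; a Void-returning test passes whenever it doesn't fail.
  expect _: Int = data
}
```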
This is useful when splitting modules for dependencies, without the desire to expose internal constructors and types. This merely skips the documentation generation; it doesn't prevent the hidden module from being accessible.
The idea is pretty simple: we'll just look for lines starting with Markdown headings, and render them in the documentation. They appear both in the sidebar and within the generated docs themselves, in between functions. This, coupled with preserving the declaration order of a module, should make the generated docs significantly more elegant to organize and present.
The rationale is to leave them in the order they are defined, so that
library authors have some freedom in how they present information. On
top of that, we'll also now parse specific comments as section headers
that will be inserted in the sidebar when present.
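A sketch of the idea (the exact comment syntax used for section
headers here is an assumption): a heading dropped between two groups of
definitions would show up both in the sidebar and in the rendered page:

```aiken
//// ## Constructing

/// Create an empty thing.
pub fn new() -> List<Int> {
  []
}

//// ## Inspecting

/// Check whether a thing is empty.
pub fn is_empty(self: List<Int>) -> Bool {
  self == []
}
```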
This also fixes a couple of other issues uncovered thanks to
conformance tests. Some functions like multiply_integer or
verify_ed25519_signature have also slightly changed their costing
functions.
There were some odd discrepancies for `integerToByteString` on the mem
side: either 1 or about 1000 mem units off, which I couldn't quite
figure out. Yet, it proved useful for validating builtins at large and
ensuring we have a valid cost model for v3.
This covers every proposal procedure but protocol parameters; that one
is yet to be done. It spans 30+ fields, and felt like a big enough
piece to tackle on its own.
Alongside a bunch of other stuff from the coverage list. In
particular, the mint transaction contains:
- reference inputs
- multiple outputs, with assets, and type-0, type-1 and type-6
  addresses
- an output with a datum hash
- an output with an inline script
- an extra datum witness, the preimage of the embedded hash
- a mint with 2 policies purposely ordered wrongly, with 1 and 2
  assets also purposely ordered wrongly; one of the mints is actually
  a burn (i.e. a negative quantity)
This is intense, as we still want to preserve the serializers for V1 &
V2, and I've tried as much as possible to avoid polluting the
application layer with many enum types such as:
```
pub enum TxOut {
    V1(TransactionOutput),
    V2(TransactionOutput),
    V3(TransactionOutput),
}
```
Those types make working with the script context cumbersome, and they
are only truly required to provide different serialisation strategies.
So instead, we keep one top-level `TxInfo` type per version (V1/V2/V3),
and we make sure to pass serialization strategies as type wrappers.
This way, the strategy propagates through the structure until it's
eliminated once it reaches the relevant types.
All in all, this strikes a correct balance between maintainability and
repetition; and it makes it possible to define _different but mostly
identical_ encoders for the various versions.
With it, I've been able to successfully encode a V3 script context and
match it against one produced using the Haskell libraries. More to
come.