Alongside a bunch of other stuff from the coverage list. In
particular, the mint transaction contains:
- reference inputs
- multiple outputs, with assets, and type-0, type-1 and type-6
addresses.
- an output with a datum hash
- an output with an inline script
- an extra datum witness, the preimage of the embedded hash
- a mint with 2 policies purposely ordered wrong, holding 1 and 2 assets
respectively, also purposely ordered wrong. One of the mints is actually
a burn (i.e. a negative quantity)
This is intense, as we still want to preserve the serializer for V1 &
V2, and I've tried as much as possible to avoid polluting the
application layer with many enum types such as:
```
pub enum TxOut {
    V1(TransactionOutput),
    V2(TransactionOutput),
    V3(TransactionOutput),
}
```
Those types make working with the script context cumbersome, and they are
only truly required to provide different serialization strategies. So
instead, we keep a single top-level `TxInfo V1/V2/V3` type, and we pass
serialization strategies as type wrappers. This way, the strategy
propagates through the structure until it is eliminated once it reaches
the relevant types.
All in all, this strikes a reasonable balance between maintainability and
repetition, and it makes it possible to define _different but mostly
identical_ encoders for the various versions.
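To illustrate the idea, here is a minimal, hypothetical sketch of the
type-wrapper approach; the `AsV2`/`AsV3` wrappers, the `Encode` trait and
the layouts below are assumptions for illustration, not the actual aiken
types:

```
// Hypothetical sketch; names and layouts are illustrative only.
struct TransactionOutput {
    address: Vec<u8>,
    lovelace: u64,
}

// Zero-cost "strategy" wrappers: they carry no data of their own and only
// select which encoder applies to the wrapped value.
struct AsV2<'a, T>(&'a T);
struct AsV3<'a, T>(&'a T);

trait Encode {
    fn encode(&self, buffer: &mut Vec<u8>);
}

impl<'a> Encode for AsV2<'a, TransactionOutput> {
    fn encode(&self, buffer: &mut Vec<u8>) {
        // V2-flavoured layout (purely illustrative).
        buffer.push(0x82);
        buffer.extend_from_slice(&self.0.address);
        buffer.extend_from_slice(&self.0.lovelace.to_be_bytes());
    }
}

impl<'a> Encode for AsV3<'a, TransactionOutput> {
    fn encode(&self, buffer: &mut Vec<u8>) {
        // V3-flavoured layout (purely illustrative).
        buffer.push(0xa2);
        buffer.extend_from_slice(&self.0.address);
        buffer.extend_from_slice(&self.0.lovelace.to_be_bytes());
    }
}

// Containers share one plain type; the strategy propagates downward and is
// only eliminated at the leaves, where the layouts actually differ.
struct TxInfo {
    outputs: Vec<TransactionOutput>,
}

impl<'a> Encode for AsV3<'a, TxInfo> {
    fn encode(&self, buffer: &mut Vec<u8>) {
        for output in &self.0.outputs {
            AsV3(output).encode(buffer);
        }
    }
}
```

The upside is that `TransactionOutput` and `TxInfo` stay version-agnostic
in the application layer; only the encoding call sites mention a version.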
With it, I've been able to successfully encode a V3 script context and
match it against one produced using the Haskell libraries. More to
come.
Let's consider the following case:
```
type Var =
  Integer

type Vars =
  List<Var>
```
This incorrectly reports an infinite cycle, because we fail to properly
type-check `Var`, which `Vars` also depends on. Yet the real issue here is
that `Integer` is an unknown type (the builtin integer type is `Int`).
This commit also upgrades miette to 7.2.0, so that we can display a
better error output when the problem is actually a cycle.
The point of those tests is to ensure that blueprints are generated
properly, irrespective of the generated code. It is annoying to
constantly get those tests failing every time we introduce an
optimization or something that would slightly change the generated
UPLC.
It is impossible to serialize/deserialize a `Data` with a negative
constructor. So the only way this can happen is by programmatically
constructing a value using the builtin `constr_data`. While possible,
that is entirely the responsibility of the programmer, and it is not
malleable by an attacker, who can only provide values as `Data` (which
must thus be decoded like any other).
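For context, here is a small, self-contained sketch of why serialization
cannot produce a negative constructor: the CBOR scheme used for `Data`
constructors only maps non-negative indices to a wire representation. The
helper below is purely illustrative and not part of the codebase:

```
// Purely illustrative helper: which CBOR tag a 'Data' constructor index
// would be serialized under.
fn constr_cbor_tag(index: i128) -> Option<u64> {
    match index {
        // Compact form: constructors 0..=6 use tags 121..=127.
        0..=6 => Some(121 + index as u64),
        // Second compact form: constructors 7..=127 use tags 1280..=1400.
        7..=127 => Some(1280 + (index as u64 - 7)),
        // General form: tag 102 wrapping [uint index, field list]; the
        // index is an *unsigned* integer, so it cannot be negative either.
        i if i >= 0 => Some(102),
        // A negative constructor has no serialized representation at all.
        _ => None,
    }
}

fn main() {
    assert_eq!(constr_cbor_tag(0), Some(121));
    assert_eq!(constr_cbor_tag(7), Some(1280));
    assert_eq!(constr_cbor_tag(1_000), Some(102));
    assert_eq!(constr_cbor_tag(-1), None);
}
```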
Cloning a `Term` is potentially dangerous, so we don't want it to happen
by mistake. Instead, we pass in var names and turn them into terms when
necessary.
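As a rough sketch of the pattern (the `Term` shape and the `eta_wrap`
helper below are simplified assumptions for illustration, not aiken's
actual types):

```
// Simplified, hypothetical `Term`; the real type is richer.
#[derive(Debug)]
enum Term {
    Var(String),
    Lambda { parameter: String, body: Box<Term> },
    Apply { function: Box<Term>, argument: Box<Term> },
}

// Builds `(\x -> f x)` from names only: the (possibly large) terms those
// names are bound to are never cloned here, merely referenced by name.
fn eta_wrap(function_name: &str, parameter_name: &str) -> Term {
    Term::Lambda {
        parameter: parameter_name.to_string(),
        body: Box::new(Term::Apply {
            function: Box::new(Term::Var(function_name.to_string())),
            argument: Box::new(Term::Var(parameter_name.to_string())),
        }),
    }
}
```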
While the ledger doesn't allow deserializing negative constr values, they
are still possible at the machine level. So we'd better make sure that we
don't make any assumptions in this regard.