This is useful when splitting a module for dependency reasons, without wanting to expose internal constructors and types. This merely skips documentation generation; it doesn't prevent the hidden module from being accessible.
The idea is pretty simple: we look for lines starting with a Markdown heading and render them in the documentation. They appear both in the sidebar and within the generated docs themselves, in between functions. This, coupled with preserving the order of declarations in a module, should make the generated docs significantly more elegant to organize and present.
The rationale is to leave them in the order they are defined, so that
library authors have some freedom in how they present information. On
top of that, we now also parse specific comments as section headers
that will be inserted in the sidebar when present.
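To make the mechanism concrete, here is a minimal illustrative sketch (hypothetical names, not the actual implementation): scan documentation lines for Markdown headings and collect them as sections for the sidebar and the inline headers.

```
// Illustrative sketch only: extract Markdown headings from doc lines so
// they can be rendered as sections (sidebar entries + inline headers).
struct DocSection {
    level: usize,
    title: String,
}

fn extract_sections(doc_lines: &[&str]) -> Vec<DocSection> {
    doc_lines
        .iter()
        .filter_map(|line| {
            let trimmed = line.trim_start();
            // Count leading '#' to get the heading level.
            let level = trimmed.chars().take_while(|c| *c == '#').count();
            if level == 0 || !trimmed[level..].starts_with(' ') {
                return None;
            }
            Some(DocSection {
                level,
                title: trimmed[level + 1..].trim().to_string(),
            })
        })
        .collect()
}

fn main() {
    let sections = extract_sections(&["# Constructing", "some prose", "## Inspecting"]);
    assert_eq!(sections.len(), 2);
    assert_eq!(sections[1].title, "Inspecting");
}
```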
This also fixes a couple of other issues uncovered thanks to
conformance tests. Some functions, like multiply_integer or
verify_ed25519_signature, have also had their costing function
slightly changed.
There was some odd discrepancy for `integerToByteString` on the mem
side: either 1 or about 1000 mem units off, which I couldn't quite
figure out. Still, this proves useful to validate builtins at large
and ensure we have a valid cost model for v3.
This covers every proposal procedure but protocol parameter updates;
that one is yet to be done. It spans 30+ fields and felt like a big
enough piece to tackle on its own.
Alongside a bunch of other stuff from the coverage list. In
particular, the mint transaction contains:
- reference inputs
- multiple outputs, with assets, and type-0, type-1 and type-6
addresses.
- an output with a datum hash
- an output with an inline script
- an extra datum witness, preimage of the embedded hash
- a mint with 2 policies purposely ordered wrong, holding 1 and 2
assets respectively, themselves purposely ordered wrong; one of the
mints is actually a burn (i.e. negative quantity)
This is intense, as we still want to preserve the serializer for V1 &
V2, and I've tried as much as possible to avoid polluting the
application layer with many enum types such as:
```
pub enum TxOut {
    V1(TransactionOutput),
    V2(TransactionOutput),
    V3(TransactionOutput),
}
```
Those types make working with the script context cumbersome, and are
only truly required to provide different serialization strategies. So
instead, we keep a single top-level `TxInfo V1/V2/V3` type, and we
pass serialization strategies as type wrappers.
This way, the strategy propagates through the structure up until it's
eliminated when it reaches the relevant types.
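To illustrate the approach (a minimal sketch with hypothetical names; the real `TxInfo` types are more involved), a zero-sized marker type per version is threaded through the encoder, so the same data type can be serialized differently depending on the wrapper it is nested under:

```
use std::marker::PhantomData;

// Hypothetical, simplified illustration: zero-sized markers stand in for
// the Plutus version and select the serialization strategy at the type level.
struct V1;
struct V3;

// One shared data type for the application layer...
struct TransactionOutput {
    lovelace: u64,
}

// ...and a thin wrapper carrying the strategy as a type parameter.
struct WithStrategy<'a, V, T>(&'a T, PhantomData<V>);

trait Encode {
    fn encode(&self, buf: &mut Vec<u8>);
}

// The strategy is eliminated once it reaches the type it concerns:
impl<'a> Encode for WithStrategy<'a, V1, TransactionOutput> {
    fn encode(&self, buf: &mut Vec<u8>) {
        // e.g. legacy, untagged encoding
        buf.extend_from_slice(&self.0.lovelace.to_be_bytes());
    }
}

impl<'a> Encode for WithStrategy<'a, V3, TransactionOutput> {
    fn encode(&self, buf: &mut Vec<u8>) {
        // e.g. newer, tagged encoding
        buf.push(0x01);
        buf.extend_from_slice(&self.0.lovelace.to_be_bytes());
    }
}

fn main() {
    let out = TransactionOutput { lovelace: 42 };
    let (mut v1, mut v3) = (Vec::new(), Vec::new());
    WithStrategy(&out, PhantomData::<V1>).encode(&mut v1);
    WithStrategy(&out, PhantomData::<V3>).encode(&mut v3);
    assert_ne!(v1, v3);
}
```

The application layer only ever manipulates `TransactionOutput`; the marker never leaks past the encoder.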
All in all, this strikes a good balance between maintainability and
repetition, and it makes it possible to define _different but mostly
identical_ encoders for the various versions.
With it, I've been able to successfully encode a V3 script context and
match it against one produced using the Haskell libraries. More to
come.
Let's consider the following case:
```
type Var =
  Integer

type Vars =
  List<Var>
```
This incorrectly reports an infinite cycle, due to the inability to
properly type-check `Var`, which `Vars` depends on. Yet the real
issue here is that `Integer` is an unknown type (Aiken's builtin
integer type is `Int`).
This commit also upgrades miette to 7.2.0, so that we can display a
better error output when the problem actually is a cycle.
The point of those tests is to ensure that blueprints are generated
properly, irrespective of the generated code. It is annoying to
constantly get those tests failing every time we introduce an
optimization or something that would slightly change the generated
UPLC.
It is impossible to serialize/deserialize a Data with a negative
constructor index. So the only way this can happen is by
programmatically constructing a value using the builtin constr_data.
While possible, it is entirely the responsibility of the programmer;
it is not malleable by an attacker, who can only provide values as
'Data' (which thus must be decoded like any other).
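As a minimal sketch (simplified, hypothetical types, not the actual codebase ones), the distinction looks like this: the decoding path rejects negative constructor indices, while the programmatic path can still produce them, so downstream code shouldn't assume indices are non-negative.

```
#[derive(Debug)]
enum Data {
    Constr { index: i128, fields: Vec<Data> },
    Integer(i128),
}

// The "wire" path: mirrors the rule that a negative index never decodes.
fn decode_constr(index: i128, fields: Vec<Data>) -> Result<Data, String> {
    if index < 0 {
        return Err(format!("invalid constructor index: {index}"));
    }
    Ok(Data::Constr { index, fields })
}

// The "machine" path: nothing prevents a program from building this
// directly, as with the constr_data builtin.
fn constr_data(index: i128, fields: Vec<Data>) -> Data {
    Data::Constr { index, fields }
}

fn main() {
    assert!(decode_constr(-1, vec![]).is_err());
    // Still representable in memory, so it must be handled defensively.
    let value = constr_data(-1, vec![Data::Integer(42)]);
    println!("{value:?}");
}
```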
Cloning a 'Term' is potentially dangerous, so we don't want it to
happen by mistake. Instead, we pass in var names and turn them into
terms when necessary.
While the ledger doesn't allow deserializing negative constr values,
they are still possible at the machine level. So we'd better make
sure that we don't make any assumptions regarding this.
This isn't sufficient however, as the 'assignment' helper handling
code generation doesn't perform any check when patterns are vars. This
is curious, and needs to be investigated further.
The spirit here is to make it easier to discover this syntax. People
have different intuitions about it, and the single pipe may not be
the most obvious one.
It is however the recommended syntax, and the formatter will rewrite
any of the others to it.
The syntax is as follows:
{ "bytes" = "...", "encoding" = "<encoding>" }
The following encodings are accepted:
"utf8", "utf-8", "hex", "base16"
Note: the duplicates are only there to make it easier for people to
discover them by accident. When "hex" (resp. "base16") is specified,
the bytes string will be decoded and must be a valid hex string.
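For illustration, a rough sketch of how such a field could be interpreted (hypothetical helper, not the actual implementation): utf8/utf-8 take the string as-is, while hex/base16 require a valid hexadecimal string and decode it.

```
// Illustrative sketch: turn the "bytes" string into raw bytes according
// to the declared "encoding".
fn decode_bytes(bytes: &str, encoding: &str) -> Result<Vec<u8>, String> {
    match encoding {
        "utf8" | "utf-8" => Ok(bytes.as_bytes().to_vec()),
        "hex" | "base16" => {
            if bytes.len() % 2 != 0 || !bytes.chars().all(|c| c.is_ascii_hexdigit()) {
                return Err(format!("invalid hex string: {bytes}"));
            }
            Ok(bytes
                .as_bytes()
                .chunks(2)
                .map(|pair| u8::from_str_radix(std::str::from_utf8(pair).unwrap(), 16).unwrap())
                .collect())
        }
        other => Err(format!("unknown encoding: {other}")),
    }
}

fn main() {
    assert_eq!(decode_bytes("666f6f", "hex").unwrap(), b"foo".to_vec());
    assert_eq!(decode_bytes("foo", "utf8").unwrap(), b"foo".to_vec());
    assert!(decode_bytes("zz", "base16").is_err());
}
```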
This is currently extremely limited as it only supports (UTF-8)
bytearrays and integers. We should seek to at least support hex bytes
sequences, as well as bools, lists and possibly options.
For the latter, the rework on constants outlined in #992 is
necessary.
This is less confusing than getting an 'UnknownModule' error
reporting a different module name than the one actually being
imported ('env').
Also, this commit fixes a few errors found in the type-checker when
reporting 'UnknownModule' errors. About half the time, we would
actually attach _imported modules_ instead of _importable modules_ to
the error, making the neighboring suggestion much worse (nay,
useless).
We figure out dependencies by looking at 'use' definitions in parsed
modules. However, in the case of environment modules, we must consider
all of them when seeing "use env". Without that, the env modules are
simply compiled in parallel and may not yet have been compiled when
they are needed as actual dependencies.
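Roughly, and with hypothetical names (not the actual code), the dependency collection needs to treat an import of `env` as an edge to every environment module:

```
// Illustrative sketch: expand a dependency on `env` into dependencies on
// all environment modules, so they are compiled first.
fn dependencies(imports: &[&str], env_modules: &[&str]) -> Vec<String> {
    let mut deps = Vec::new();
    for import in imports {
        if *import == "env" {
            // Conservatively depend on every environment module.
            deps.extend(env_modules.iter().map(|m| m.to_string()));
        } else {
            deps.push(import.to_string());
        }
    }
    deps
}

fn main() {
    let deps = dependencies(&["aiken/list", "env"], &["default", "preview"]);
    assert_eq!(deps, vec!["aiken/list", "default", "preview"]);
}
```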
We simply provide a flag with a free-form value which acts as the
name of the module to look up in the 'env' folder. The strategy is to
replace the environment module name on-the-fly when a user tries to
import 'env'.
If the environment isn't found, an 'UnknownModule' error is raised
(which I will slightly adjust in a following commit to something more
related to environments).
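Conceptually, the substitution is tiny; here is a sketch with hypothetical names (the actual flag name and module layout may differ), covering both the redirection and the unknown-environment fallback:

```
// Illustrative sketch: redirect an import of `env` to the environment
// selected on the command line, or fail with an "unknown module" error.
fn resolve_import(
    import: &str,
    selected_env: &str,
    known_envs: &[&str],
) -> Result<String, String> {
    if import != "env" {
        return Ok(import.to_string());
    }
    if known_envs.contains(&selected_env) {
        Ok(selected_env.to_string())
    } else {
        Err(format!("unknown environment module: {selected_env}"))
    }
}

fn main() {
    let envs = ["default", "preview"];
    assert_eq!(resolve_import("env", "preview", &envs).unwrap(), "preview");
    assert_eq!(resolve_import("aiken/list", "preview", &envs).unwrap(), "aiken/list");
    assert!(resolve_import("env", "mainnet", &envs).is_err());
}
```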
There are a few important consequences to this design which may not seem
immediately obvious:
1. We parse and type-check every env module, even if it isn't used.
This ensures that code doesn't break with a compilation error simply
because people forgot to type-check a given env.
Note that compilation could still fail because the env module itself
could provide an invalid API; this only ensures that each module
isn't wrong when taken in isolation.
2. Technically, this also means that one can import env modules in
other env modules by their names. I don't know if it's a good or
bad idea at this point, but it doesn't really do any harm;
dependencies and cycles are handled all the same.
Using 'pallas' as a dependency brings in utxo-rpc and other annoying dependencies such as _tokio_. This not only makes the overall build longer, but it also prevents it from working at all when targeting wasm.
- Doesn't allow pattern-matching on G1/G2 elements and strings,
because the use cases for those are unclear and it adds complexity to
the feature.
- We still _parse_ patterns on G1/G2 elements and strings, but emit an
error in those cases.
- The syntax is the same as for bytearray literals (i.e. supports hex,
utf-8 strings or plain arrays of bytes).
There are currently two zero-arg builtins:
- mkNilData
- mkNilPairData
And while, strictly speaking, they have no arguments, the VM still
requires that they be applied to an extra unit argument.
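A minimal sketch of the consequence for code generation (hypothetical term representation, not the actual one): even these builtins must be applied to a unit before the VM will evaluate them.

```
// Illustrative sketch: "zero-argument" builtins such as mkNilData and
// mkNilPairData are only evaluated by the VM once applied to a unit.
#[derive(Debug, Clone)]
enum Term {
    Builtin(&'static str),
    Unit,
    Apply(Box<Term>, Box<Term>),
}

fn apply_zero_arg_builtin(name: &'static str) -> Term {
    Term::Apply(Box::new(Term::Builtin(name)), Box::new(Term::Unit))
}

fn main() {
    println!("{:?}", apply_zero_arg_builtin("mkNilData"));
    println!("{:?}", apply_zero_arg_builtin("mkNilPairData"));
}
```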
While this builtin is readily available through the Aiken syntax
`[head, ..tail]`, there's no reason to not support its builtin form
even though we may not encourage its usage. For completeness and to
avoid bad surprises, it is now supported.
Fixes #964.