This is crucial as some checks regarding variable usage depend on
warnings; otherwise we may accidentally remove variables from the AST
as a consequence of backtracking during deep inference.
The current inference system walks expressions from "top to bottom",
starting from definitions higher up in the source file and working
downwards. When a call is encountered, we use whatever information is
known about the callee definition at the moment the call is inferred.
This causes interesting issues when the callee has no annotations and
is thus only partially known. For example:
```
pub fn list(fuzzer: Option<a>) -> Option<List<a>> {
  inner(fuzzer, [])
}

fn inner(fuzzer, xs) -> Option<List<b>> {
  when fuzzer is {
    None -> Some(xs)
    Some(x) -> Some([x, ..xs])
  }
}
```
In this small program, we infer `list` first and run into `inner`.
Yet the arguments of `inner` are not annotated and, since we haven't
inferred `inner` yet, we create two unbound variables for them.
Naturally, we link the type of `[]` to the type of `xs` -- which is
still unbound at this point. The return type of `inner` is given by
its annotation, so all in all, unification succeeds without ever
having to commit to a type for `[]`.
It is only later, when `inner` is inferred, that we generalise the
unbound type of `xs` into a generic, the same as `b` in the
annotation. At this point, `[]` is also typed with this same generic,
which has a different id than `a` in `list` since it comes from
another definition.
This is unfortunate and causes issues down the line for code
generation. The problem doesn't occur when `inner`'s arguments are
properly annotated, or when `inner` happens to be inferred first.
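
To make the id mismatch concrete, here is a toy Rust model of how such type variables could be represented; the shape and names below are hypothetical, not the compiler's actual types:

```
// Toy model only -- the real type representation is richer than this.
#[allow(dead_code)]
#[derive(Debug)]
enum Type {
    // A variable the checker has not committed to yet (e.g. `xs` and `[]`
    // while inferring `list`, before `inner` is known).
    Unbound { id: u64 },
    // A generalised type variable; `a` in `list` and `b` in `inner` end up
    // with two *different* ids, hence the "uncongruous generics".
    Generic { id: u64 },
    // A type application such as Option<...> or List<...>.
    App { name: String, args: Vec<Type> },
}

fn main() {
    // `[]` inside `list` ends up tagged with the generic generalised from
    // `inner`'s `xs`, whose id differs from the one assigned to `a` in `list`.
    let a_in_list = Type::Generic { id: 0 };
    let b_in_inner = Type::Generic { id: 1 };
    println!("{a_in_list:?} vs {b_in_inner:?}");
}
```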
Hence, I saw two possible avenues for fixing this problem:
1. Detect the presence of 'uncongruous generics' in definitions after
they've all been inferred, and raise a user error asking for more
annotations.
2. Infer definitions in dependency order, so that definitions used by
others are inferred first.
This commit does (2) (although it may still be a good idea to
eventually do (1)) since it offers a much better user experience. One
way to do (2) is to construct a dependency graph of function calls and
then perform a topological sort over it, as sketched below.
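
For illustration, here is a minimal, self-contained Rust sketch of that alternative; the call graph is just a hand-written map here, whereas the real compiler would have to extract it from the AST:

```
use std::collections::{BTreeMap, BTreeSet};

// Hypothetical call graph: each definition mapped to the definitions it
// calls.
type CallGraph = BTreeMap<&'static str, BTreeSet<&'static str>>;

// Kahn-style topological sort: repeatedly pick definitions whose callees
// have all been ordered already, so callees always precede their callers.
fn inference_order(graph: &CallGraph) -> Vec<&'static str> {
    let mut remaining = graph.clone();
    let mut order = Vec::new();

    while !remaining.is_empty() {
        let ready: Vec<&'static str> = remaining
            .iter()
            .filter(|(_, callees)| callees.is_empty())
            .map(|(name, _)| *name)
            .collect();

        if ready.is_empty() {
            break; // mutually recursive definitions would need extra care
        }

        for name in &ready {
            remaining.remove(name);
            order.push(*name);
        }

        for callees in remaining.values_mut() {
            for name in &ready {
                callees.remove(name);
            }
        }
    }

    order
}

fn main() {
    let graph = CallGraph::from([
        ("list", BTreeSet::from(["inner"])),
        ("inner", BTreeSet::new()),
    ]);

    // Prints ["inner", "list"]: `inner` is inferred before its caller.
    println!("{:?}", inference_order(&graph));
}
```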
Building such a graph is, however, quite tricky as it requires walking
through the AST while maintaining scope and so on, which is more or
less what the inference step is already doing; so it feels like double
work.
Thus instead, this commit performs a depth-first inference, "pausing"
the inference of a definition when a call is encountered in order to
fully infer the callee first. To achieve this properly, we must ensure
that we never infer the same definition twice, so we now "remember"
already-inferred definitions in the environment.
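
A minimal Rust sketch of this pause-and-remember idea, using toy stand-ins for the compiler's real structures (all names below are hypothetical):

```
use std::collections::{HashMap, HashSet};

// Toy stand-in for a function definition: its name and the names it calls.
struct Definition {
    name: String,
    callees: Vec<String>,
}

// Toy stand-in for the environment, which now remembers what has already
// been fully inferred so a definition is never inferred twice.
#[derive(Default)]
struct Environment {
    inferred: HashSet<String>,
}

fn infer(name: &str, module: &HashMap<String, Definition>, env: &mut Environment) {
    // Already inferred (or currently being inferred): nothing to do.
    if !env.inferred.insert(name.to_string()) {
        return;
    }

    if let Some(def) = module.get(name) {
        for callee in &def.callees {
            // "Pause" the current definition and fully infer the callee
            // first, so its argument types are known rather than unbound.
            infer(callee, module, env);
        }
        // ... the actual inference of `def`'s body would happen here ...
        println!("inferred {}", def.name);
    }
}

fn main() {
    let module = HashMap::from([
        ("list".to_string(), Definition { name: "list".into(), callees: vec!["inner".into()] }),
        ("inner".to_string(), Definition { name: "inner".into(), callees: vec![] }),
    ]);

    let mut env = Environment::default();
    infer("list", &module, &mut env); // prints "inferred inner" then "inferred list"
}
```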
Until now, we would pretty-print unbound variables the same way we would pretty-print generics. This turned out to be very confusing when debugging, as the two have quite different semantics, and it helps to be able to visualize unbound types in definitions.
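
As a toy illustration of the distinction (the rendering below is an arbitrary choice for this Rust sketch, not necessarily the compiler's actual output):

```
use std::fmt;

// Toy sketch: render unbound variables distinctly from generics so they can
// no longer be mistaken for one another in debug output. The '?' and 'g'
// prefixes are arbitrary choices for this sketch.
enum TypeVar {
    Unbound { id: u64 },
    Generic { id: u64 },
}

impl fmt::Display for TypeVar {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            TypeVar::Unbound { id } => write!(f, "?{id}"),
            TypeVar::Generic { id } => write!(f, "g{id}"),
        }
    }
}

fn main() {
    println!("{} vs {}", TypeVar::Unbound { id: 0 }, TypeVar::Generic { id: 0 });
}
```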
This was somewhat wrong and later corrected by codegen, but we should be re-using the same generic id across an entire definition when the variable refers to the same element.
This should not happen; if it does, it's an error from the type-checker. So instead of silently swallowing the error and adopting a behavior which is only _sometimes_ right, it is better to fail loudly and investigate.
Refactor get_uplc_type to account for constr types that don't exactly resolve to a uplc type
Check that the arg_stack in the UPLC generator has only 1 argument at the end of generation
warning fixes
Temporarily using the 'specialize-dict-key' branch from the stdlib
which makes use of Pair where relevant. Once this is merged back into
'main', we should update the acceptance test toml files so they keep
being automatically upgraded.
This commit also fixes an oversight in the reification of data-types,
which now properly distinguishes between pairs and 2-tuples.
Co-authored-by: Microproofs <kasey.white@cardanofoundation.org>
Before this commit, we would parse 'Pair' as a user-defined data-type,
thus piggybacking on the whole record system. While perhaps handy for
some things, it's also semantically wrong and induces a lot more
complexity in codegen, which then needs to systematically distinguish,
for every data-type access, between pairs and everything else.
So it's better to have it as a separate expression and handle it
similarly to tuples (since it's fundamentally a 2-tuple with a special
serialization).
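
As a toy Rust sketch of the shape this takes (the real AST is of course richer, and these names are purely illustrative): `Pair` gets its own expression variant next to tuples, instead of being routed through the record machinery.

```
// Illustrative only -- not the compiler's actual AST.
#[allow(dead_code)]
enum Expr {
    // 2-tuples and n-tuples.
    Tuple(Vec<Expr>),
    // A dedicated variant for Pair: fundamentally a 2-tuple with a special
    // serialization, so it is handled alongside tuples rather than as a
    // user-defined record constructor.
    Pair(Box<Expr>, Box<Expr>),
    // User-defined data-type constructors keep going through the record path.
    Constructor { name: String, args: Vec<Expr> },
    Int(i64),
}

fn main() {
    let _pair = Expr::Pair(Box::new(Expr::Int(1)), Box::new(Expr::Int(2)));
}
```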
And a few more tests along the way for other patterns. Note that it is important here that we try to parse a 'Pair' pattern BEFORE we try to parse a constructor pattern, because the latter would otherwise swallow any Pair pattern.
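
A minimal, self-contained Rust illustration of why the ordering matters (hand-rolled string matching, purely for the sketch; the real parser uses combinators): if the constructor alternative were tried first, it would happily accept `Pair(a, b)` as an ordinary constructor pattern.

```
#[derive(Debug)]
enum Pattern {
    Pair(String, String),
    Constructor(String),
}

// Try the more specific `Pair` alternative first; otherwise the general
// constructor alternative would also match `Pair(..)` and win.
fn parse_pattern(input: &str) -> Option<Pattern> {
    parse_pair(input).or_else(|| parse_constructor(input))
}

fn parse_pair(input: &str) -> Option<Pattern> {
    let args = input.strip_prefix("Pair(")?.strip_suffix(')')?;
    let (fst, snd) = args.split_once(',')?;
    Some(Pattern::Pair(fst.trim().into(), snd.trim().into()))
}

fn parse_constructor(input: &str) -> Option<Pattern> {
    input
        .chars()
        .next()
        .filter(|c| c.is_ascii_uppercase())
        .map(|_| Pattern::Constructor(input.into()))
}

fn main() {
    // `Pair(a, b)` parses as a Pair pattern only because it is tried first.
    println!("{:?}", parse_pattern("Pair(a, b)"));
    println!("{:?}", parse_pattern("Some(a)"));
}
```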