This is likely even better than what was done for property testing. Perhaps we shall revise that one someday.
Signed-off-by: KtorZ <5680256+KtorZ@users.noreply.github.com>
Going for a terminal plot for now, as this was the original idea and it is immediately visual. All benchmark points can also be obtained as JSON when the output is redirected, as for tests. All in all, we provide a flexible output which should be useful; whether it is the best we can do, time (and people/users) will tell.
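A minimal sketch of that idea, using hypothetical names (`BenchPoint`, `report`, `render_terminal_plot`) rather than the actual Aiken CLI code: the output format is picked based on whether stdout is attached to a terminal, so redirecting yields JSON data points instead of a plot.

```rust
use std::io::{stdout, IsTerminal};

/// One benchmark data point (illustrative fields only).
struct BenchPoint {
    size: usize,
    mem: u64,
    cpu: u64,
}

fn report(points: &[BenchPoint]) {
    if stdout().is_terminal() {
        // Interactive session: render a terminal plot of cost vs. size.
        render_terminal_plot(points);
    } else {
        // Redirected output: emit one JSON object per data point.
        for p in points {
            println!(
                r#"{{"size": {}, "mem": {}, "cpu": {}}}"#,
                p.size, p.mem, p.cpu
            );
        }
    }
}

fn render_terminal_plot(_points: &[BenchPoint]) {
    // Plotting elided; a crate such as `textplots` could be used here.
}
```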
Signed-off-by: KtorZ <5680256+KtorZ@users.noreply.github.com>
The idea is to gather a good sample of measures from running benchmarks at various sizes, so one can get a sense of how well a function performs as the size grows.
Given that the size can be made arbitrarily large, and that we currently
report all benchmarks, I introduced a Fibonacci heuristic to gather
data points from 0 up to the max size using increasingly large steps.
This is defined as a trait, as I anticipate we might need different
sizing strategies, likely driven by the user via a command-line option;
but for now, this will do.
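A minimal sketch of what such a trait and the Fibonacci stepping could look like; the names (`SizingStrategy`, `FibonacciSizing`) are illustrative, not the actual Aiken API.

```rust
/// Decides at which sizes a benchmark is measured.
trait SizingStrategy {
    fn sizes(&self, max_size: usize) -> Vec<usize>;
}

/// Steps from 0 up to `max_size` with Fibonacci-growing increments
/// (0, 1, 2, 4, 7, 12, 20, ...), so large sizes don't flood the report.
struct FibonacciSizing;

impl SizingStrategy for FibonacciSizing {
    fn sizes(&self, max_size: usize) -> Vec<usize> {
        let mut sizes = vec![0];
        let (mut prev, mut step) = (0usize, 1usize);
        let mut current = 0usize;
        while current + step <= max_size {
            current += step;
            sizes.push(current);
            let next = prev + step;
            prev = step;
            step = next;
        }
        // Always include the maximum size as the final data point.
        if *sizes.last().unwrap() != max_size {
            sizes.push(max_size);
        }
        sizes
    }
}
```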
Signed-off-by: KtorZ <5680256+KtorZ@users.noreply.github.com>
This commit removes some duplication between the bench and test runners,
and fixes the results coming out of running benchmarks.
Running a benchmark is expected to yield multiple measures, one per
iteration. For now, it'll suffice to show results for each size;
but eventually, we may try to interpolate the results with
different curves and pick the best candidate.
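As an illustration of the shape of those results (hypothetical names, not the actual Aiken types): measures are simply grouped by size and averaged for display, leaving curve fitting for later.

```rust
use std::collections::BTreeMap;

/// One measure produced by a single benchmark iteration (illustrative).
struct Measure {
    size: usize,
    cost: u64,
}

/// Group measures by size and compute the mean cost per size.
fn summarize(measures: &[Measure]) -> BTreeMap<usize, f64> {
    let mut grouped: BTreeMap<usize, Vec<u64>> = BTreeMap::new();
    for m in measures {
        grouped.entry(m.size).or_default().push(m.cost);
    }
    grouped
        .into_iter()
        .map(|(size, costs)| {
            let mean = costs.iter().sum::<u64>() as f64 / costs.len() as f64;
            (size, mean)
        })
        .collect()
}
```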
Signed-off-by: KtorZ <5680256+KtorZ@users.noreply.github.com>
* Fix: Deeply nested assignments would offset the new columns count calculation. We now track relevant columns and their paths to ensure each row gets wildcards when it doesn't contain the relevant column.
* Add test plus clippy fix
* Clippy fix
* New version clippy fix
Avoid having the interface hang for several seconds without feedback while counterexamples are being simplified. This sends a heads-up to the user, indicating that a search for a counterexample is in progress.
The playground doesn't (and cannot) depend on aiken-project, because that becomes a gigantic pain. So instead, we try to keep essential stuff inside aiken-lang when possible.