This is arguably even better than what was done for property testing.
Perhaps we shall revise that one someday.
Signed-off-by: KtorZ <5680256+KtorZ@users.noreply.github.com>
Going for a terminal plot for now, as this was the original idea and it is immediately visual. All benchmark points can also be obtained as JSON by redirecting the output, as with tests. All in all, this provides a flexible output that should prove useful. Whether it is the best we can do, time (and people/users) will tell.
Signed-off-by: KtorZ <5680256+KtorZ@users.noreply.github.com>
The idea is to gather a good sample of measures from running
benchmarks at various sizes, so one can get a sense of how well a
function scales.
Given that sizes can be made arbitrarily large, and that we currently
report all benchmark points, I installed a Fibonacci heuristic to
gather data points from 0 up to the max size with increasingly large
steps.
This is defined as a trait, since I already anticipate we might need
different sizing strategies, likely driven by the user via a
command-line option; but for now, this will do.
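The strategy described above can be sketched roughly as follows; the
names (`SizeStrategy`, `FibonacciStrategy`) are illustrative
assumptions, not the actual Aiken API:

```rust
/// Hypothetical trait abstracting over benchmark sizing strategies.
trait SizeStrategy {
    /// Yield the sequence of sizes at which to run a benchmark.
    fn sizes(&self, max_size: usize) -> Vec<usize>;
}

struct FibonacciStrategy;

impl SizeStrategy for FibonacciStrategy {
    fn sizes(&self, max_size: usize) -> Vec<usize> {
        let mut sizes = vec![0];
        let (mut prev, mut step) = (0usize, 1usize);
        let mut current = 1usize;
        while current <= max_size {
            sizes.push(current);
            // Fibonacci stepping: each gap is the sum of the two before it.
            let next_step = prev + step;
            prev = step;
            step = next_step;
            current += step;
        }
        sizes
    }
}

fn main() {
    // Data points from 0 up to the max size, with increasingly large gaps.
    let points = FibonacciStrategy.sizes(30);
    println!("{:?}", points); // prints [0, 1, 2, 4, 7, 12, 20]
}
```

With this shape, swapping in a different strategy (say, linear or
exponential stepping, picked via a command-line option) only requires
another `SizeStrategy` implementation.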
Signed-off-by: KtorZ <5680256+KtorZ@users.noreply.github.com>
This commit removes some duplication between the bench and test
runners, and fixes the results coming out of running benchmarks.
Running a benchmark is expected to yield multiple measures, one for
each iteration. For now, it suffices to show results for each size;
but eventually, we may try to interpolate the results with different
curves and pick the best candidate.
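A minimal sketch of the result shape this implies; the type and field
names here are assumptions for illustration, not the actual types:

```rust
/// Assumed shape of a benchmark's results: one cost measure per size.
struct BenchmarkResult {
    name: String,
    /// (size, cost) pairs, one per benchmarked size.
    measures: Vec<(usize, u64)>,
}

impl BenchmarkResult {
    /// For now, just report each size's measure; curve fitting over
    /// these points could be layered on later.
    fn report(&self) {
        for (size, cost) in &self.measures {
            println!("{}: size {:>6} -> {} units", self.name, size, cost);
        }
    }
}

fn main() {
    let result = BenchmarkResult {
        name: "fold_sum".into(),
        measures: vec![(0, 120), (1, 340), (2, 560)],
    };
    result.report();
}
```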
Signed-off-by: KtorZ <5680256+KtorZ@users.noreply.github.com>
Fixes:
- Disallow benchmarks with no arguments; otherwise, this causes a
  compiler panic down the line.
- Do not force the return value to be a boolean or void. We do not
  actually control what a benchmark returns, so anything really works
  here.
Refactor:
- Re-use code between test and bench type-checking, especially the
  bits related to gathering information about the via arguments.
  There's quite a lot of it, and simply copy-pasting everything would
  likely cause issues and discrepancies at the first change.
Signed-off-by: KtorZ <5680256+KtorZ@users.noreply.github.com>
This allows conditions like `expect x == 1` to match the performance of `x == 1 && ...`
Also changes builtin forcing to accommodate the new case-constr apply optimization.