This commit is contained in:
waalge 2025-02-16 19:05:17 +00:00
parent 194492234e
commit 62b4aa0523
40 changed files with 811 additions and 374 deletions

.gitignore vendored

@@ -1,6 +1,6 @@
.direnv/
_site
docs/
_cache
dist


@@ -2,5 +2,4 @@
## Commands
Enter devshell, and run `menu`
See flake for details.

assets/css/main.css Normal file

@@ -0,0 +1,108 @@
@tailwind base;
@tailwind components;
@tailwind utilities;
@font-face {
/* Set in tailwindconfig */
font-family: "jetbrains-mono";
src:
local("jetbrains-mono"),
url("/fonts/JetBrainsMono-Medium.woff2") format("woff2");
}
article {
margin-bottom: 2rem;
}
article > section > :is(pre, p, h1, h2, h3, h4, h5, h6) {
margin-top: 2rem;
}
article > section {
font-family: "Lucida Grande", sans-serif;
}
article > section > :is(h1, h2, h3, h4, h5, h6, code) {
font-family: "jetbrains-mono";
}
article > section > blockquote {
padding: 1rem;
border-left-width: 4px;
border-color: rgb(239 68 68);
font-style: italic;
}
article > section > h1 {
margin-top: 2rem;
font-size: 3rem;
}
article > section > h1::before {
content: "# ";
}
article > section > h2 {
font-size: 2rem;
}
article > section > h2::before {
content: "## ";
}
article > section > h3 {
font-size: 1.5rem;
}
article > section > h3::before {
content: "### ";
}
article > section > h4 {
font-size: 1.3rem;
}
article > section > h4::before {
content: "#### ";
}
article > section {
margin-top: 4rem;
}
article a {
text-decoration-color: rgb(239 68 68);
text-decoration-thickness: 4px;
text-decoration-line: underline;
transition-duration: 70ms;
}
article a:hover {
text-decoration-thickness: 8px;
text-decoration-color: rgb(185 28 28);
text-decoration-line: underline;
}
article ul {
margin-left: 1rem;
list-style-type: "- ";
}
article ol {
margin-left: 1rem;
list-style: decimal inside;
}
#footnotes {
padding-top: 1rem;
}
#footnotes > ol > li {
margin-top: 1rem;
}
#footnotes > ol > li > p {
display: inline;
}

assets/css/mini.css Normal file

File diff suppressed because one or more lines are too long


@@ -361,6 +361,9 @@ video {
border-width: 0;
white-space: nowrap;
}
.static {
position: static;
}
.absolute {
position: absolute;
}
@@ -931,10 +934,6 @@ article ol {
}
}
@media (min-width: 768px) {
.md\:mx-24 {
margin-left: 6rem;
margin-right: 6rem;
}
.md\:gap-8 {
gap: 2rem;
}


@@ -4,72 +4,73 @@ date: 2023-08-07
---
Not so long ago Emurgo announced they were doing a Cardano centered hackathon.
It was a welcome prospect - very few similar such events seem to exist in the space.
Things went monotonically south ever since the announcement, but that's a different story.
One particularly interesting quirk was that of the three "tracks" of the hackathon,
one was _Zero Knowledge_ (aka zk).
Why particularly interesting? In some sense it is not surprising:
zk has been very trendy these last few years around blockchains.
However, building on Cardano is notoriously challenging.
Building with zk on a zk-native blockchain is itself a very steep learning curve.
So combining the two, zk on Cardano, seemed... a bit mad.
This post is borne out of a best-effort attempt at seeing how far "zk on Cardano" can be pushed.
## What is zk?
There is no shortage of explanations describing what zk is
( _eg_ [by Vitalik](https://vitalik.ca/general/2021/01/26/snarks.html){target="\_blank"} or
[a full mooc](https://zk-learning.org/){target="\_blank"} ).
There is also a reasonable breadth to the field of zk that includes things like distributed compute.
Zk involves some really neat maths that lets you do some seemingly magical feats,
and pairs well with blockchain in extending what is functionally possible.
Let's stick to a simple and prototypical example.
Suppose Alice and Bob are playing battleships.
The game begins with Alice and Bob placing their ships within their own coordinate grid.
They then take turns picking coordinates to "strike".
If they hit nothing then their turn ends, but if they hit a ship then they strike again.
The winner is the first to strike all coordinates containing their opponent's ships.
Alice knows Bob to be a notorious liar; how can she enjoy the game?
Each guess she makes, Bob gleefully shouts "Miss!".
She can't ask Bob to show he's not lying by revealing the actual locations of the ships.
She could ask Charlie to independently verify that Bob's not lying,
but then what if Charlie is actually on team Bob and also lies?
Or Bob might suspect Charlie is actually on team Alice, slyly brought in to give Alice some hints.
Is there a way that Bob can prove to Alice that each guess is a miss,
but without revealing the locations of the ships either to Alice or anyone else?
The answer is yes.
Using zk Bob can produce a proof each time Alice's guess misses if and only if it honestly does.
Alice can inspect each proof and verify Bob's response.
Alice can interrogate the proof as much as she wants, but she won't learn anything more than
her guess was a miss.
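To give a feel for the machinery, here is a toy Python sketch of a weaker cousin of this idea: a salted hash commitment per cell. Bob publishes the commitments up front; to prove a miss he opens just the queried cell. Unlike a real zk proof this does reveal that one cell's contents, and every name here is illustrative, but it captures the commit-then-prove shape.

```python
import hashlib
import secrets

def commit_board(board):
    """Commit to each cell with a salted hash; only the digests are published."""
    salts = {cell: secrets.token_bytes(16) for cell in board}
    commitments = {
        cell: hashlib.sha256(salts[cell] + contents.encode()).hexdigest()
        for cell, contents in board.items()
    }
    return salts, commitments

def open_cell(board, salts, cell):
    """Bob reveals one cell's salt and contents so Alice can check them."""
    return salts[cell], board[cell]

def verify(commitments, cell, salt, claimed):
    """Alice recomputes the digest and compares against the commitment."""
    return hashlib.sha256(salt + claimed.encode()).hexdigest() == commitments[cell]

# Bob's tiny 2x2 board: "ship" or "water" in each cell.
board = {(0, 0): "ship", (0, 1): "water", (1, 0): "water", (1, 1): "water"}
salts, commitments = commit_board(board)  # commitments are public

# Alice guesses (0, 1); Bob proves it was a miss for that cell only.
salt, claimed = open_cell(board, salts, (0, 1))
assert claimed == "water"
assert verify(commitments, (0, 1), salt, claimed)
```

Since Bob committed before the game began, he cannot later claim a hit cell was water without the digest check failing.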
There are a multitude of different ways to do this,
but essentially it involves modeling the problem as a bunch of algebra
over finite fields - like a lot of cryptography.
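As a taste of that algebra, here is a minimal sketch of arithmetic in a prime field using nothing but Python's stdlib. The tiny prime 17 is for illustration only; real systems use primes hundreds of bits long.

```python
# Arithmetic in the prime field GF(17): every nonzero element has a
# multiplicative inverse, so division "just works", unlike plain integers.
p = 17

def add(a, b): return (a + b) % p
def mul(a, b): return (a * b) % p
def inv(a):    return pow(a, p - 2, p)  # Fermat's little theorem: a^(p-2) = a^-1

assert add(9, 12) == 4        # 21 mod 17
assert mul(5, inv(5)) == 1    # 5 * 5^-1 = 1 in GF(17)
assert mul(inv(3), 6) == 2    # i.e. 6 / 3 = 2, done entirely mod 17
```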
What's the _snark_ of zk-snark?
Snark stands for _Succinct Non-Interactive Argument of Knowledge_.
And without saying anything more, it means that Alice has to do way less algebra than Bob.
In applications this is important because Bob might not be able to lie anymore but he could still waste Alice's time.
## Sudoku snark
Sudoku snark was the entrant to Emurgo's hackathon.
The summary-pitch-story deck is [here](https://pub.kompact.io/sudoku-snark){target="\_blank"}.
Links to the associated repos: [plutus-zk](https://github.com/waalge/plutus-zk){target="\_blank"}
and [sudoku-snark](https://github.com/waalge/sudoku-snark){target="\_blank"}.
Just after the hackathon got underway there was a
[large PR merged](https://github.com/input-output-hk/plutus/pull/5231){target="\_blank"}
into the main branch of plutus.
It's a mammoth culmination of many many months of work.
In it were some fundamental primitives needed for running zk algorithms.
The idea of the project was as follows:
@ -79,48 +80,54 @@ The idea of the project was as follows:
- wrap up in a gui
Unsurprisingly to anyone who's hung around the Cardano ecosystem long enough,
this third part is where things got stuck.
We did get as far as running a cluster of nodes in the Conway era with the latest version of plutus
but unrelated changes seemed to thwart any chance of building transactions here.
A quick shout-out to the [modulo-p.io](https://modulo-p.io/){target="\_blank"} team.
They had a different approach and managed to implement a zk algorithm with the existing plutus primitives.
This spared the need to play the foolhardy dependency bumping game with the Cardano node.
However, because zk is so arithmetically intense,
the app won't run outside a hydra head, and then only with very generous max unit budgets (afaics).
This approach won't be necessary when we have the new version of plutus available.
Nonetheless, it's very neat to see it done and they packaged it very nicely.
The validator in Sudoku snark uses [groth16](https://eprint.iacr.org/2016/260.pdf).
In part because this was already mostly available from the plutus repo itself.
It is also the most obvious candidate to begin with.
It's relatively mature, relatively simple, can be implemented from the new primitives,
and importantly in Cardano land has small proof size.
(As far as I know, the smallest of comparable algorithms.)
The program to generate the setup and proofs uses the Arkworks framework.
Again this choice was initially inspired by a script from the IOG team,
but again it seems like a smart choice.
Arkworks is a well conceived, highly modular framework for zk,
which makes it easy to pull in the bits we need to perform our off-chain logic.
The choice of game, sudoku, was in turn inspired by an arkworks example.
It's not the most compelling of choices, but it's simple and it did for now.
Battleships would have been more compelling, or mastermind, as the modulo-p team used.
The intended game play involved locking Ada at a utxo corresponding to a sudoku puzzle,
spendable only if a player could provide proof they knew the solution.
Through the magic of zk they'd not disclose the solution itself to the other competitors.
Other details were TBC: is it first and second prizes? are players whitelisted? _etc_.
## So are we zk-Cardano yet?
We're close.
There is potentially still quite a while before these new primitives in plutus reach mainnet.
The word on the street is that it might happen before the end of 2023.
Even sooner, there will be versions of the Cardano node available with the new primitives,
and so possibly plumb-able into hydra without causing oneself an aneurysm.
In development time that's not so long: we can start thinking about what to build with zk on Cardano.


@@ -5,100 +5,99 @@ date: 2023-09-20
## Hydra is cool
Hydra[^1] is a very cool project. It is a layer 2 for Cardano that is _isomorphic_ to the L1.
Here isomorphic means that Plutus runs in Hydra just like it does on the L1.
That dapp you've just toiled over for months to run on the L1 can be put in Hydra and 'just work'.
[^1]:
This post does not distinguish between Hydra and Hydra Head, referring to both as Hydra.
If you want to know more about Hydra, then check out their
[explainers](https://hydra.family/head-protocol/core-concepts).
## Hydra's compromise
Hydra boasts it can achieve higher throughput and lower transaction fees compared to the Cardano L1,
as well as near instant settling and no roll-backs.
You may be asking _If my dapp just works on Hydra and it's better in all key respects,
then why don't we all just use Hydra?_.
The answer is that these improvements come at a cost.
Consensus in Hydra differs from that on the L1:
Hydra doesn't use Ouroboros. Instead, all participating hydra nodes
must sign off on all updates to the chain state.
Practically speaking, far fewer nodes can participate in Hydra,
and one quiet node stops the whole Hydra chain updating.
Not great for an L1.
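The unanimity rule can be sketched in a few lines. This toy model is entirely hypothetical (HMAC stands in for the real multisignature scheme, and the names are made up), but it shows why one quiet node stalls the whole head:

```python
import hashlib
import hmac

# Toy model: a head's state advances only if *every* participant signs it.
# HMAC over the serialized state stands in for real Ed25519 signatures.
participants = {"alice": b"alice-key", "bob": b"bob-key", "carol": b"carol-key"}

def sign(key: bytes, state: bytes) -> bytes:
    return hmac.new(key, state, hashlib.sha256).digest()

def state_is_confirmed(state: bytes, signatures: dict) -> bool:
    """Confirmed only with a valid signature from every single participant."""
    return all(
        name in signatures
        and hmac.compare_digest(signatures[name], sign(key, state))
        for name, key in participants.items()
    )

state = b"snapshot-42"
sigs = {name: sign(key, state) for name, key in participants.items()}
assert state_is_confirmed(state, sigs)

# One quiet node stalls everything: without carol's signature the state is stuck.
del sigs["carol"]
assert not state_is_confirmed(state, sigs)
```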
## You don't need Hydra
Hydra is an example of a way to do state channels.
A state channel relies on the integrity of the L1, while accumulating state separately from it (L2).
At some point the layers are brought into sync.
This is when funds on the L1 can be unlocked, and/or the state of the L2 updated.
Hydra could be thought of as providing some future-proofing.
It is possible for a Hydra instance to run indefinitely,
and Plutus scripts not yet written will be executable in some already running instance.
However, because Hydra's consensus is so brittle, the longevity of an instance is not something to depend on.
Each and any transaction may be its last.
A key question when considering Hydra is _Do I need isomorphic-ness?_.
If you know all your business logic before instantiation
then the answer is **no, you don't care for isomorphic-ness**.
Instead, you can roll-your-own L2.
It depends on your use case as to how much work that ends up being.
It can be very simple.
## You don't want Hydra
In Hydra, the latest agreed state in the L2 is the one that the L1 will accept as the most legitimate.
This is a sensible default.
Suppose however you have a game of poker where one player learns that they've lost and rage quits.
From the game's perspective, that final transaction should be forced through - the player's loss is inevitable.
At present this isn't possible with Hydra.
If a party doesn't sign, then a state isn't valid.
In another use case, suppose there is some particularly intense on-chain verification
that would be prohibitive on the L1, but whose results you'd like to
persist onto the L1 and/or be recovered in future L2 instances.
This could be done with validity tokens, but anything minted in the L2 won't persist onto the L1.
Another key question then is _What is the right way to sync the L1 and L2 states?_.
Hydra has a way of doing it which might or might not be appropriate for your use case.
Rolling your own L2 means that the sync logic can fit your business needs.
Both the cases above are resolvable with custom sync logic.
## An Example: Subbit.xyz
Probably the simplest, non-trivial example using state channels is [Subbit.xyz](https://subbit.xyz).
Subbit.xyz is premised on the observation that subscription is a very common use case:
there are two parties where one pays the other incrementally.
It sacrifices generality to gain absolutely minimal overhead for both parties.
In Subbit.xyz, Alice, a consumer, subscribes to some service of Bob, a provider.
Alice instantiates the channel by locking funds, similar to Hydra.
There are only two mechanisms for unlocking - one for Alice and the other for Bob.
All logic is known at instantiation.
A consumer needs only to keep track of their account balance,
ascertain the cost of each outgoing request,
and produce valid signatures for a few dozen bytes of data at a time.
They don't need to watch the L1, and it's a non-chatty protocol.
The low resource needs open it up to applications
on intermittently connected user devices such as laptops and mobiles,
and even micro-controllers.
High throughput remains achievable.
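The consumer-side bookkeeping really is tiny. A toy sketch of the idea, with HMAC standing in for a real signature scheme and all names hypothetical (this is not Subbit.xyz's actual protocol):

```python
import hashlib
import hmac

class Consumer:
    """Toy channel consumer: track a cumulative spend, sign each new total."""

    def __init__(self, key: bytes, locked: int):
        self.key = key        # a real channel would hold an Ed25519 signing key
        self.locked = locked  # funds locked on the L1 at channel open
        self.spent = 0        # cumulative amount handed over so far

    def pay(self, cost: int):
        """Approve one more request; return the signed cumulative total."""
        if self.spent + cost > self.locked:
            raise ValueError("insufficient channel funds")
        self.spent += cost
        msg = self.spent.to_bytes(8, "big")  # a few dozen bytes at most
        return msg, hmac.new(self.key, msg, hashlib.sha256).digest()

alice = Consumer(b"alice-key", locked=1_000)
msg, sig = alice.pay(30)   # first request costs 30
msg, sig = alice.pay(45)   # each message carries the cumulative total
assert int.from_bytes(msg, "big") == 75
```

The provider only ever needs to keep the latest (message, signature) pair per subscriber: it supersedes all earlier ones.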
A provider must track each subscriber's account, and periodically check the state of the L1.
This could conceivably be as little as once a week or once a month.
The low resource needs for a provider means they have the ability to serve more with less.
## Hydra for QoL
When Hydra reaches a point of maturity where it's plug and play,
it's potentially far easier to deploy with Hydra than to roll your own L2.
Isomorphic-ness gives Hydra incredible flexibility and generality.
You don't need isomorphic-ness, but because of it, Hydra could be an easy and convenient solution.
As for custom sync logic, it is surely the case that there is a tranche of interesting applications where
it's far easier and more effective to reuse Hydra infra and modify it than creating your own L2 from scratch.

content/posts/principles.md Normal file

@@ -0,0 +1,272 @@
---
title: "Principles of dapp design"
date: 2024-11-26
status: published
---
There are a collection of disparate thoughts on what makes for good design that
inform the decisions we make when designing dapps.
We thought it helpful to articulate them for ourselves, for any future
collaborators, and any wider audience interested.
These thoughts are expressed as diktats, but are intended as a perspective
from which to pick through their shortcomings.
These "principles" don't fit neatly into some SOLID-like framework. Some are
high-level, general, security-conscious software dev stuff; others reflect quite
specifically on building an L2 on Cardano. Some implications
manifest at the software architecture level, while others inform the development
process.
## Context
Before unpacking the principles, it's worth reflecting on _devving in
Cardanoland_.
The Cardano ledger is lean by design: it is to provide a kernel of integrity
for larger applications. It was never meant to, say, host and execute all the
components of an application. It is even a little too lean in places (eg using
key hashes over keys in the script context).
On-chain code is the code executed in the ledger. All integrity guarantees of an
application built over ledger ultimately rely on the on-chain code. On-chain
code is purely discriminatory: show it a transaction and it will succeed or
fail. It cannot generate valid transactions. That is the responsibility of
other, off-chain code. Off-chain code is a necessary part of any such
application, but does not provide integrity guarantees.
We'll call applications, structured such that their integrity is backed by the
ledger, "dapps". Typically the term has other assumptions: a decentralised
application should be run-able by anyone with internet access, reasonable
hardware, and without need to seek permission from some authority.
On-chain code takes form as Plutus validators. Writing validators is
**extreme programming**.
Let's immediately disambiguate from the [XP methodology][xp-wiki]:
it's extreme in the sense that it is highly unusual.
[xp-wiki]: https://en.wikipedia.org/wiki/Extreme_programming
It's extreme in the following ways:
- Highly constrained execution environment, which has multiple implications
- Diminishing returns of code re-use
- Conflicting motivation on factorization
- Un-patch-ablity
- High cost of failure
- Functions that do nothing but fail
Resource limitations at runtime are incredibly restrictive. Moreover, all
resources used have to be paid for. This concern is shared to some extent
with low-level programming, but even there it is relatively niche on modern
hardware. There are many implications of this. A key one is that implementation
matters.
Libraries have limited use. As noted, the efficiency of an implementation
matters. If one needs to aggregate multiple values over a list of inputs, it is
generally cheaper to do this in a single fold, and without
constructing/deconstructing across a function boundary. Implementation details
cannot be abstracted away in a simple one-size-fits-all manner, at least not at
zero cost (cf plutus-apps and the bloated outputs it would return). In real
world examples this cost is significant enough that the use of stock library
methods must be considered. One saving grace is that this is manageable. The
resource limitations mean that anything over a couple thousand lines of code
risks not being usable in a transaction anyway.
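The single-fold point can be made concrete. A sketch in Python (illustrative only; on-chain this would be a fold in the validator language), aggregating two values from a list of inputs:

```python
from functools import reduce

# Hypothetical transaction inputs: (lovelace, token_count) pairs.
inputs = [(5, 2), (3, 7), (8, 1)]

# Two separate passes: each aggregate walks the list independently.
total_lovelace = sum(a for a, _ in inputs)
total_tokens = sum(t for _, t in inputs)

# One fold: a single traversal accumulates both values at once --
# the shape that tends to be cheaper under on-chain cost accounting.
acc = reduce(lambda acc, i: (acc[0] + i[0], acc[1] + i[1]), inputs, (0, 0))

assert acc == (total_lovelace, total_tokens) == (16, 10)
```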
Factorizing code in a way that communicates purpose to a reader should be a strong
consideration. However, we already have another, possibly conflicting,
consideration: implementation efficiency. Some might say this is a compiler
problem, and that we should have clever compilers. To which the most immediate
answer is "yes, but right now we don't". It is also not a full panacea. A clever
compiler is harder to reason about and harder to verify, and it can
become more obscure how to prod the compiler into pathways known to be more
efficient.
Validators are, a priori, not patchable. Depending on the validator, once
'deployed' it may be impossible to update. There is no way to bump or patch a
validator once it is in use without such functionality being designed in. This
is not unique. It was standard in pre-internet software, where rolling updates
were infeasible, and still is for devices that aren't internet enabled. However,
it is now far more the niche than the mainstream.
The correctness of a validator is high stakes. This is a property shared with
any security critical software. It is not in the same league as, say, aviation
software, but it is much closer to that than to a game or web app. If there is a
bug, then user funds are at risk. This compounds being not patchable. Great
efforts must be spent verifying validator correctness.
Validators are very unusual functions, particularly in a functional paradigm.
They take as input a very restricted set of args, and either fail or return
unit. That's all we care about: discriminating acceptable transactions from
unacceptable transactions. Sure, this is akin to writing a test, or an assert
condition - but those are commonly auxiliary rather than the culmination of the
code base. Writing Plutus is not akin to, say, writing some web based API or an
ETL pipeline. There is the potential for the code to be utilized by third
parties, if they desired to build their own transactions that involve the
validators. However this use is generally secondary to optimizing for the
intended set of transaction builders.
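A toy Python rendering of this shape (entirely hypothetical; not a real Plutus API, and the field names are made up): the function communicates nothing except succeed-or-fail.

```python
# A validator is a pure discriminator: given a datum, redeemer, and transaction,
# it either succeeds (returns unit / None) or fails (raises). Nothing else.
def validator(datum: dict, redeemer: dict, tx: dict) -> None:
    # The claimed owner key must actually have signed the transaction.
    assert redeemer["owner_key"] in tx.get("signatories", []), "missing signature"
    # The transaction must pay the owner at least the locked amount.
    assert tx.get("paid_to_owner", 0) >= datum["locked"], "owner underpaid"

ok_tx = {"signatories": ["owner-key"], "paid_to_owner": 100}
validator({"locked": 100}, {"owner_key": "owner-key"}, ok_tx)  # passes silently

bad_tx = {"signatories": [], "paid_to_owner": 100}
try:
    validator({"locked": 100}, {"owner_key": "owner-key"}, bad_tx)
except AssertionError:
    pass  # rejected: the transaction is not acceptable
```

Note the asymmetry: the validator can reject this transaction, but it could never have constructed `ok_tx`; that is the off-chain code's job.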
## Principles
### On-chain code is to keep participants safe from others
On-chain code **is** responsible for keeping the user safe from others. It is
its fundamental responsibility.
On-chain code is **not** responsible for keeping the user safe from themselves.
A user can compromise themselves completely and totally by, say, publishing their signing key,
voiding any guarantees provided by on-chain code.
Thus such guarantees are generally superfluous.
Off-chain code is responsible for keeping the user safe, and it is off-chain
code that should make it hard for them to shoot themselves in the foot.
On-chain code is also not there to facilitate politeness. Good and bad behaviour
is a binary distinction, discriminated thoroughly by a validator. A partner may
stop responding for legitimate or malicious reasons. We do not need to
speculate; we need only ensure that the other partner's funds are safe and can
be recovered.
Suppose there is a dapp that requires the depositing of funds, with a pathway in which the funds
may be returned. Further, suppose that this return is verified by a signature
associated with a key provided by the depositor.
Some may wish on-chain code to check everything it is possible to check,
including keeping the user safe from themself. From this perspective, the
validator should have a constraint that this key has signed the tx, verifying
that the depositor holds the associated signing key.
According to this principle, the validator should not; the larger code base
should. The documentation should be crystal clear about what this key is for and
how it should be managed. But the on-chain code should be solely focused on
keeping the user safe from others, and others from the user.
There is a loose sense of the [streetlight effect](https://en.wikipedia.org/wiki/Streetlight_effect) borne out in code.
### Simplicity invites security
Particularly for on-chain code, err on the side of simplicity. Design and build
such that reasoning around scenarios is straightforward, and answering "what if
..." questions is easy.
This should not come at the expense of being feature complete, although the
principle then becomes a little grey in application. In places it translates to:
develop an abstraction in which the features become simple. In other places, we
will have to find the happy compromise between simple-ness and feature-ful-ness.
For example, in the case of designing [Cardano Lightning](cardano-lightning.org):
can a partner close two of their channels in a single transaction?
This might be important, but perhaps it could be handled by an abstraction in
the application that hides the detail from the user.
There is overlap here with the previous principle.
By keeping the on-chain code laser focused, we don't have nice-to-haves
cluttering the code base, possibly obscuring an otherwise glaring bug.
The current principle further justifies excluding from the validator logic
that a self-interested participant is already motivated to ensure for themselves.
For example: a validator must check that a thread token never leaves a script address;
it need not check that a participant has paid themselves their due funds.
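The division of labour can be illustrated with a sketch (hypothetical types and names; a real check would inspect inputs and outputs via the script context):

```rust
// Hypothetical, simplified view of a transaction's outputs.
struct Output {
    address: &'static str,
    has_thread_token: bool,
}

// Safety check: the thread token must be paid back to the script address.
// Deliberately absent: any check that a participant paid themselves their
// due funds - a self-interested participant will ensure that anyway.
fn thread_token_stays(outputs: &[Output], script_address: &str) -> bool {
    outputs
        .iter()
        .any(|o| o.has_thread_token && o.address == script_address)
}
```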
### Prioritise user autonomy
Deprioritise collaborative actions.
This principle is employable only in specific circumstances.
In Cardano Lightning a user is responsible for their own funds, and only their
own funds. They cannot spend their partner's funds. For example, winding down a
channel requires each user to submit evidence of what they are owed.
The pathway could instead have enforced that one partner leave an address to
which the other partner's tx would output the correct funds. That would
potentially save a transaction in the winding-down process. However, it also
invites questions of double satisfaction, and resolutions to those make it
harder to reason about.
Instead, following this principle,
the design prioritises multi-channel management.
A fundamental participant type in a healthy Lightning network is the Gateway.
A gateway participant is highly
connected within the network and is (financially) motivated to route payments
between participants. A gateway node in particular needs to manage their
channels, and manage their capital amongst channels as efficiently as possible.
This is in the whole network's interest.
Cardano Lightning does permit mutually agreed actions.
However, such actions are considered a secondary
pathway. Any channel has (at most) the funds of only the two partners of the
channel. Mutual actions are verified by the presence of both partners'
signatures on a transaction. As we assume that any participant will act in a
self-interested manner, and is responsible for keeping themselves safe, few
checks are done beyond this.
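A mutual-action check can be as small as requiring both signatures; a minimal sketch under assumed types (key hashes as integers):

```rust
// A mutual action is verified simply by the presence of both partners'
// signatures; beyond that, each partner is assumed to look after themselves.
fn mutual_action_ok(signatories: &[u64], partner_a: u64, partner_b: u64) -> bool {
    signatories.contains(&partner_a) && signatories.contains(&partner_b)
}
```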
### Distinguish safety from convenience
Safety comes first, but we also need things to be practical and preferably even
convenient. Make explicit when a feature has been included for safety, and when
it is there for convenience.
In some respects, this is a restating of the first and/or second principles.
The on-chain code is precisely what keeps users safe.
The small distinction is that this extends to off-chain code too.
There are aspects of off-chain code that are integral, and other parts that are
merely convenient.
### Spec first, implement second
A spec:
- Bridges from intent to code
- Expels ambiguity early
- Says no to feature creep
A spec bridges the design intentions to the implementation. Code is a halfway
house between what a human can parse and what a machine can parse. Where that
halfway falls is a question for language design(ers), and there is a full
spectrum of positions available across all the possible languages. Very roughly,
the closer it is to human readable, the less efficiently it ends up being
executed by the machine.
Regardless of a language's expressiveness, it doesn't replace the need for
additional documentation.
"Self-describing code" is a nice idea and not without strong merits. Naming
should follow conventions, and should not be knowingly obscure. In software at
large, the problem of separate documentation falling out of sync is observed ad
nauseam. But. The idea that descriptive names and some inline comments are
sufficient to communicate design and intent is not credible. [I agree with
Jef][jef-raskin-blog].
[jef-raskin-blog]: https://queue.acm.org/detail.cfm?id=1053354
The above is especially acute in the context of the extreme programming paradigm
that is Plutus development. We can and should demand more attention from a dev
engaging with the application. They should not expect to "intuitively"
understand how to interface with the code. Again - this is not to justify any
obscurity of design or code, but a validator is not just another library they'd
be interfacing with. The stakes are too high. They must read the spec.
We suffer less from docs/code divergence than is experienced in "normal"
development. For the same reason that we have un-patch-ability, we have a fixed
feature set. It is in evolving software and its code base, by adding new
features or modifying existing ones, that code diverges from docs.
A spec helps expel ambiguity early. It provides an opportunity to check that
everyone is on the same page before any lines of code are written, and without
having to unpick lines of code after the fact.
A spec helps make the implementation stage straightforward and intentional. It
greatly reduces the required bandwidth since each part of the code has a
prescribed purpose. This also reduces the cost of writing wrong things before
settling on something acceptable.
Having a spec combats feature creep. Adding feature requirements part way
through implementation can lead to convoluted design and code, and ultimately
greatly increases the chance of bugs. As discussed for on-chain code, rolling
updates are not generally possible. We need to be sure from the start what the
feature requirements are (and, as importantly, what they aren't).
## Summary
The principles have been arrived at over numerous projects,
most explicitly and recently while working on Cardano Lightning.
As alluded to in the introduction, these principles should ...
well, be treated more like [guidelines](https://youtu.be/k9ojK9Q_ARE).
If you have comments, questions, suggestions, or criticisms, please get in touch.
@ -11,15 +11,15 @@ Aims:
### Motivations
The motivation for writing this came from a desire to add additional features to
Aiken not yet available. One such feature would evaluate an arbitrary function
in Aiken callable from JavaScript. This would help a lot with testing and when
trying to align on and off-chain code.
Another more pipe dreamy, ad-hoc function extraction - from a span of code,
generate a function. A digression to answer _why would this be at all helpful?!_
Validator logic often needs a broad context throughout. How then to best factor
code? Possible solutions:
1. Introduce types / structs
2. Have functions with lots of arguments
@ -32,25 +32,27 @@ The problems are:
2. Becomes tedious aligning the definition and function call.
3. Ends up with very long validators which are hard to unit test.
My current preferred way is to accept that validator functions are long. Ad-hoc
function extraction would allow for sections of code to be tested without
needing to be factored out.
To do either of these, we need to get to grips with the Aiken compilation
pipeline.
### This won't age well
Aiken is undergoing active development. This post started life with Aiken
~v1.14. Aiken v1.15 introduced reasonably significant changes to the compilation
pipeline. The word is that there aren't any more big changes in the near future,
but this article will undoubtedly begin to diverge from the current code-base
even before publishing.
### Limitations of narrating code
Narrating code becomes a compromise between being honest and accurate, and being
readable and digestible. The command `aiken build` covers well in excess of
10,000 LoC. The writing of this post ground to a halt as it reached deeper into
the code-base. To redeem it, some (possibly large) sections remain black boxes.
## Aiken build
@ -69,48 +71,49 @@ Tracing `aiken build`, the pipeline is roughly:
We'll pick our way through these steps.
At a high level we are trying to do something straightforward: reformulate Aiken
code as Uplc. Some Aiken expressions are relatively easy to handle for example
an Aiken `Int` goes to an `Int` in Uplc. Some Aiken expressions require more
involved handling, for example an Aiken `If... If Else... Else ` must have the
branches "nested" in Uplc. Aiken has lots of nice-to-haves like pattern
matching, modules, and generics; Uplc has none of these.
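The `If ... Else` case can be pictured as follows (toy types, a sketch of the idea rather than Aiken's actual code): a flat chain of branches becomes right-nested conditionals in the target.

```rust
#[derive(Debug, PartialEq, Clone)]
enum Expr {
    Int(i64),
    Bool(bool),
    // The nested conditional form the target language requires.
    IfThenElse(Box<Expr>, Box<Expr>, Box<Expr>),
}

// A flat Aiken-style `if / else if / else` chain.
struct Chain {
    branches: Vec<(Expr, Expr)>, // (condition, body) pairs
    otherwise: Expr,
}

// Fold the flat chain, last branch first, into right-nested IfThenElse terms.
fn nest(chain: Chain) -> Expr {
    chain
        .branches
        .into_iter()
        .rev()
        .fold(chain.otherwise, |acc, (cond, body)| {
            Expr::IfThenElse(Box::new(cond), Box::new(body), Box::new(acc))
        })
}
```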
### The Preamble
#### Cli handling
The cli enters at `aiken/src/cmd/mod.rs` which parses the command. With some
establishing of context, the program enters `Project::build`
(`crates/aiken-project/src/lib.rs`), which in turn calls `Project::compile`.
#### File crawl
The program looks for Aiken files in both `./lib` and `./validator`
sub-directories. For each it walks over all contents (recursively) looking for
`.ak` extensions. It treats these two sets of files a little differently. For
example, only validator files can contain the special validator functions.
#### Parse and Type check
`Project::parse_sources` parses the module source code. The heavy lifting is
done by `aiken_lang::parser::module`, which is evaluated on each file. It
produces a `Module` containing a list of parsed definitions of the file:
functions, types _etc_, together with metadata like docstrings and the file
path.
`Project::type_check` inspects the parsed modules and, as the name implies,
checks the types. It flags type level warnings and errors and constructs a hash
map of `CheckedModule`s.
#### Code generator
The code generator `CodeGenerator` (`aiken-lang/src/gen_uplc.rs`) is given the
definitions found from the previous step, together with the plutus builtins. It
has additional fields for things like debugging.
This is handed over to a `Blueprint` (`aiken-project/src/blueprint/mod.rs`). The
blueprint does little more than find the validators on which to run the code
gen. The heavy lifting is done by `CodeGenerator::generate`.
We are now ready to take the source code and create plutus.
@ -119,23 +122,24 @@ We are now ready to take the source code and create plutus.
Things become a bit intimidating at this point in terms of sheer lines of code:
`gen_uplc.rs` and three modules in `gen_uplc/` totals > 8500 LoC.
Aiken has its own _intermediate representation_ called `air` (as in Aiken
Intermediate Representation). Intermediate representations are common in
compiled languages. `Air` is defined in `aiken-lang/src/gen_uplc/air.rs`.
Unsurprisingly, it looks a little bit like a language between Aiken and plutus.
In fact, Aiken has another intermediate representation: `AirTree`. This is
constructed between the `TypedExpr` and `Vec<Air>` ie between parsed Aiken and
air.
#### Climbing the AirTree
Within `CodeGenerator::generate`, `CodeGenerator::build` is called on the
function body. This takes a `TypedExpr` and constructs and returns an `AirTree`.
The construction is recursive as it traverses the recursive `TypedExpr` data
structure. More on what an airtree is and its construction below. At the same
time `self` is treated as `mut`, so we need to keep an eye on this too. The
method which is called and uses this mutability of self is `self.assignment`. It
does so by
```sample
- self.assignment
@ -143,21 +147,23 @@ It does so by
└ self.code_gen_functions.insert
```
and thus creates a hashmap of all the functions that appear in the
definition. From call to return, `assign` covers > 600 LoC, so we'll leave
this as a black box. (`self.handle_each_clause` is also called with `mut`,
which in turn calls `self.build`, for which the `mut` is needed.)
Validators in Aiken are boolean functions while in Uplc they are unit-valued
(aka void-valued) functions. Thus the air tree is wrapped such that `false`
results in an error (`wrap_validator_condition`). I don't know why there is a
prevailing thought that boolean functions are preferable to functions that error
if anything is wrong - which is what validators are.
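The wrapping itself is conceptually tiny; in effect (a sketch of the idea, not Aiken's actual `wrap_validator_condition`):

```rust
// Conceptual sketch: an Aiken validator returns a bool, but a Uplc
// validator must either succeed with unit or error out.
fn wrap_validator_condition(result: bool) -> Result<(), &'static str> {
    if result {
        Ok(()) // unit: the transaction is acceptable
    } else {
        Err("validator returned false") // false becomes an error
    }
}
```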
`check_validator_args` again extends the airtree from the previous step, and
again calls `self.assignment` mutating self. Something interesting is happening
here. Script context is the final argument of a validator - for any script
purpose. `check_validator_args` treats the script context like it is an unused
argument. The importance of this is not immediate, and I've still yet to
appreciate why this happens.
Let's take a look at what AirTree actually is
@ -172,8 +178,9 @@ pub enum AirTree {
}
```
Note that `AirStatement` and `AirExpression` are mutually recursive definitions
with `AirTree`. Otherwise, it would be unclear from first inspection how
tree-like this really is.
`AirExpression` has multiple constructors. These include (non-exhaustive)
@ -190,27 +197,26 @@ Otherwise, it would be unclear from first inspection how tree-like this really i
- pattern matching
- unwrapping data structures
Note that `AirTree` has many methods that are partial functions, as in there are
possible states that are not considered legitimate at different points of its
construction and use. For example `hoist_over` will throw an error if called on
an `Expression`. As `AirTree` is for internal use only, the scope for potential
problems is reasonably contained. It seems likely this is to avoid
similar-yet-different IRs between steps. However, the trade off is that it
partially obfuscates what is a valid state where.
What is hoisting? Hoisting gives the airtree depth. The motivation is that by
the time we hit Uplc it is "generally better" that
- function definitions appear once rather than being inlined multiple times
- the definition appears as close to use as possible
Hoisting creates tree paths. The final airtree to airtree step,
`self.hoist_functions_to_validator`, traverses these paths. There is a lot of
mutating of self, making it quite hard to keep a handle on things. In all this
(several thousand?) LoC, it is essentially ascertaining in which node of the
tree to insert each function definition. In a resource constrained environment
like plutus, this effort is warranted.
At the same time this function deals with
@ -221,47 +227,49 @@ Neither of which exist at the Uplc level.
#### Into Air
The `to_vec : AirTree -> Vec<Air>` is much easier to digest. For one, it is not
evaluated in the context of the code generator, and two, there is no mutation of
the airtree. The function recursively takes nodes of the tree and maps them to
entries in a mutable vector. It flattens the tree to a vec.
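The pattern is the familiar recursive flattening of a tree into a vector; schematically (toy types standing in for `AirTree` and `Air`):

```rust
// Toy stand-ins for AirTree (the tree) and Air (the flat entries).
enum Tree {
    Leaf(u32),
    Node(Vec<Tree>),
}

// Recursively push each node's contents into a mutable vector,
// flattening the tree into a linear sequence.
fn to_vec(tree: &Tree, out: &mut Vec<u32>) {
    match tree {
        Tree::Leaf(x) => out.push(*x),
        Tree::Node(children) => {
            for child in children {
                to_vec(child, out);
            }
        }
    }
}
```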
### Down to Uplc
Next we go from `Vec<Air> -> Term<Name>`. This step is a little more involved
than the previous. For one, this is executed in the context of the code
generator. Moreover, the code generator is treated as mutable - ouch.
On further inspection we see that the only mutation is setting
`self.needs_field_access = true`. This flag informs the compiler that, if true,
additional terms must be added in one of the final steps (see
`CodeGenerator::finalize`).
As noted above, some of the mappings from air to terms are immediate like
`Air::Bool -> Term::bool`.
Others are less so. Some examples:
- `Air::Var` requires 100 LoC to do case handling on different constructors.
- Lists in air have no immediate analogue in uplc
- builtins, as in built-in functions (standard shorthand), have to be mediated
with some combination of `force` and `delay` in order to behave as they
should.
- user functions must be "uncurried", ie treated as a sequence of single
argument functions, and recursion must be handled
- some magic is done in order to efficiently allow "record updates".
#### Cranking the Optimizer
There is a sequence of operations performed on the Uplc, mapping
`Term<Name> -> Term<Name>`. This removes inconsequential parts of the logic
which have been generated, including:
- removing application of the identity function
- directly substituting where apply lambda is applied to a constant or builtin
- inline or simplify where apply lambda is applied to a parameter that appears
once or not at all
Each of these optimizing methods has its own relatively narrow focus, and so
although there is a fair number of LoC, it's reasonably straightforward to
follow. Some are applied multiple times.
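The first of these passes - removing applications of the identity function - can be sketched over a toy term language (not Aiken's actual `Term<Name>`):

```rust
// Toy lambda-calculus terms, standing in for uplc's Term<Name>.
#[derive(Debug, PartialEq)]
enum Term {
    Var(String),
    Lam(String, Box<Term>),
    App(Box<Term>, Box<Term>),
}

// One optimizer pass: rewrite `(\x -> x) arg` to `arg`, recursively.
fn remove_identity_apply(t: Term) -> Term {
    match t {
        Term::App(f, arg) => {
            let f = remove_identity_apply(*f);
            let arg = remove_identity_apply(*arg);
            if let Term::Lam(param, body) = &f {
                if matches!(body.as_ref(), Term::Var(v) if v == param) {
                    return arg; // identity applied: keep only the argument
                }
            }
            Term::App(Box::new(f), Box::new(arg))
        }
        Term::Lam(p, body) => Term::Lam(p, Box::new(remove_identity_apply(*body))),
        v => v,
    }
}
```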
### The End
@ -269,17 +277,18 @@ The generated program can now be serialized and included in the blueprint.
### Plutus Core Signposting
All this fuss is to get us to a point where we can write Uplc - and good Uplc at
that. Note that there are many ways to generate code and most of them are bad.
The various design decisions and compilation steps make more sense when we have
a better understanding of the target language.
Uplc is a lambda calculus. For a comprehensive definition of Uplc, check out the
specification found
[here](https://github.com/input-output-hk/plutus/#specifications-and-design)
from the plutus GitHub repo. (I imagine this link will be maintained longer than
the current actual link.) If you're not at all familiar with lambda calculus, I
recommend [an unpacking](https://crypto.stanford.edu/~blynn/lambda/) by Ben
Lynn.
### What next?
@ -1,5 +1,6 @@
---
title: why is building txs hard?
draft: true
---
## What is a dapp?
@ -9,7 +9,7 @@
flake-root.url = "github:srid/flake-root";
};
outputs = inputs@{ flake-parts, ... }:
outputs = inputs @ { flake-parts, ... }:
flake-parts.lib.mkFlake { inherit inputs; } {
imports = [
# To import a flake module
@ -22,19 +22,28 @@
inputs.flake-root.flakeModule
];
systems = [ "x86_64-linux" "aarch64-darwin" ];
perSystem = { config, self', inputs', pkgs, system, ... }: {
perSystem =
{ config
, self'
, inputs'
, pkgs
, system
, ...
}: {
# Per-system attributes can be defined here. The self' and inputs'
# module parameters provide easy access to attributes of the same
# system.
haskellProjects.default = {
devShell = {
tools = hp: {
tools = hp:
{
fourmolu = hp.fourmolu;
hoogle = hp.hoogle;
haskell-language-server = hp.haskell-language-server;
treefmt = config.treefmt.build.wrapper;
} // config.treefmt.build.programs;
}
// config.treefmt.build.programs;
hlsCheck.enable = false;
};
autoWire = [ "packages" "apps" "checks" ]; # Wire all but the devShell
@ -51,6 +60,7 @@
programs.nixpkgs-fmt.enable = true;
programs.cabal-fmt.enable = true;
programs.hlint.enable = true;
programs.alejandra.enable = true;
# We use fourmolu
programs.ormolu.package = pkgs.haskellPackages.fourmolu;
@ -61,14 +71,20 @@
];
};
programs.prettier.enable = true;
programs.prettier = {
enable = true;
settings = {
printWidth = 80;
proseWrap = "always";
};
};
};
# Equivalent to inputs'.nixpkgs.legacyPackages.hello;
devShells.default =
let
menu = pkgs.writeShellScriptBin "menu"
menu =
pkgs.writeShellScriptBin "menu"
''
echo -e "\nCommands available: \n${
builtins.foldl' (x: y: x + " -> " + (pkgs.lib.getName y) + "\n") "" my-packages
@ -78,14 +94,18 @@
menu
build
watch
tailwind
deploy
];
tailwind = pkgs.writeShellScriptBin "tailwind" ''
tailwindcss -i ./content/css/main.css -o ./assets/css/mini.css --minify
'';
build = pkgs.writeShellScriptBin "build" ''
tailwindcss -i ./content/css/main.css -o ./content/css/mini.css --minify
${tailwind}/bin/tailwind
cabal run site -- build
'';
watch = pkgs.writeShellScriptBin "watch" ''
tailwindcss -i ./content/css/main.css -o ./content/css/mini.css --minify
${tailwind}/bin/tailwind
cabal run site -- watch
'';
deploy = pkgs.writeShellScriptBin "deploy" ''
@ -97,7 +117,8 @@
config.haskellProjects.default.outputs.devShell
config.flake-root.devShell
];
packages = with pkgs; [
packages = with pkgs;
[
caddy
nil
nodePackages_latest.vscode-langservers-extracted
@ -105,14 +126,14 @@
nodePackages_latest.typescript-language-server
haskellPackages.hakyll
zlib
] ++ my-packages;
]
++ my-packages;
};
};
flake = {
# The usual flake attributes can be defined here, including system-
# agnostic ones like nixosModule and system-enumerating ones, although
# those are more easily expressed in perSystem.
};
};
}
site.hs
@ -6,29 +6,36 @@ import Hakyll
import System.FilePath (joinPath, replaceExtension, splitDirectories, splitExtension)
--------------------------------------------------------------------------------
myConfiguration :: Configuration
myConfiguration =
defaultConfiguration
{ destinationDirectory = "docs"
}
main :: IO ()
main = hakyll $ do
match "content/favicon.png" $ do
main = hakyllWith myConfiguration $ do
match "assets/favicon.png" $ do
route rmPrefix
compile copyFileCompiler
match "content/images/*" $ do
match "assets/images/*" $ do
route rmPrefix
compile copyFileCompiler
match "content/scripts/*" $ do
match "assets/scripts/*" $ do
route rmPrefix
compile copyFileCompiler
match "content/css/*" $ do
match "assets/css/*" $ do
route rmPrefix
compile compressCssCompiler
match "content/fonts/*" $ do
match "assets/fonts/*" $ do
route rmPrefix
compile copyFileCompiler
match "content/posts/*.md" $ do
matchMetadata "content/posts/*.md" ((Just "true" /=) . lookupString "draft") $ do
route rmPrefixMd
compile $
pandocCompiler
@ -50,11 +57,6 @@ main = hakyll $ do
>>= loadAndApplyTemplate "templates/default.html" archiveCtx
>>= relativizeUrls
-- match "content/index/*" $ do
-- compile $
-- pandocCompilerWith x
match "content/index.md" $ do
route rmPrefixMd
compile $ do
templates/item.html Normal file
@ -0,0 +1,7 @@
<div class="text-1xl font-bold">
$title$
</div>
<div class="text-gray-800 dark:text-gray-200 mt-4">
$body$
</div>
templates/list.html Normal file
@ -0,0 +1,7 @@
<div class="text-1xl font-bold">
$title$
</div>
<div class="text-gray-800 dark:text-gray-200 mt-4">
$body$
</div>
templates/lists.html Normal file
@ -0,0 +1,5 @@
<div class="index flex flex-col justify-between gap-4 sm:gap-8">
$for(sections)$
$body$
$endfor$
</div>