Billion-Dollar Maybe
Verification-First Design and the Future of Trustless Systems
Stark, February 2026
PART I: THE VERIFICATION DISCIPLINE
In March 2009, Tony Hoare took the stage at QCon London and made a confession:
“I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years” [Hoare, 2009].
The lesson was not about null pointers specifically. It was about the boundary between what a system guarantees and what it merely hopes. Hoare’s type system was designed to verify safety at compile time — before execution, not after. The null reference broke that guarantee. It introduced a value meaning “I don’t know” into a system that assumed it did know. A maybe masquerading as an answer.
The lesson was not about null pointers specifically. It was about the boundary between what a system guarantees and what it merely hopes.
The debt compounded silently. A null reference propagates through a program until something finally attempts to use it, and the system crashes — far from the original error, in ways that are expensive to diagnose. The mistake was not technical. It was epistemic: allowing uncertainty to flow through a system designed around certainty.
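The propagation pattern can be sketched in a few lines. This is an illustrative Python fragment, not from any real system, and all names in it are hypothetical. The absent value travels through the call chain until something finally dereferences it, far from the lookup that produced it, unless the caller writes the check that nothing forced them to write:

```python
from typing import Optional

def find_account(accounts: dict, name: str) -> Optional[dict]:
    # Returns None on a miss: a "maybe" the caller is free to ignore.
    return accounts.get(name)

def balance_report(accounts: dict, name: str) -> str:
    acct = find_account(accounts, name)
    # Without this guard, acct["balance"] below would raise a TypeError
    # here -- far from the lookup where the None actually originated.
    if acct is None:
        raise KeyError(f"no such account: {name}")
    return f"{name}: {acct['balance']}"
```

The guard is exactly the obligation Hoare's original type system was meant to impose automatically; the null reference made it optional, and optional checks get skipped.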
This pattern recurs wherever validity is discovered at runtime rather than established at construction. The separation of validity from execution appears across mature engineering disciplines precisely because violating it produces the same billion-dollar compounding. Compilers reject malformed programs before they run. Databases enforce constraints before commits. Aerospace systems verify flight envelopes before actuator commands. Type checkers prove properties that runtime testing could never establish.
The evidence suggests that cryptocurrency’s dominant architectures represent an aberration from this pattern. The Internet took a similar path: decentralized origins gave way to centralized consolidation, not through conspiracy but through architectural choices that created vacuums filled by intermediaries. While Bitcoin stayed true to verification-first design, subsequent protocols invented features but forgot foundations.
This essay traces the verification-first lineage, identifies where the field abandoned it, and argues why Zenon’s dual-ledger architecture is necessary.
Verification Before Execution
Hoare was not alone. The pioneers of programming language theory were practitioners who learned the hard way that systems fail when validity is discovered at runtime rather than established at construction.
Hoare’s own 1969 paper “An Axiomatic Basis for Computer Programming” had established the theoretical foundation: program correctness can be determined through static analysis [Hoare, 1969]. The type checker acts as an automated proof verifier, rejecting programs that fail to satisfy logical rules before a single instruction runs. The null reference was a retreat from this principle — a concession to implementation convenience that violated the guarantees the type system was supposed to provide.
Edsger Dijkstra provided the most quoted formulation of why testing alone fails: “Program testing can be used to show the presence of bugs, but never to show their absence!” [Dijkstra, 1970]. His “Notes on Structured Programming” argued that programmers must not only produce correct programs but demonstrate correctness in a convincing manner, before execution, not after. This was not perfectionism. It was engineering discipline under resource constraints. When memory was measured in kilobytes and CPU cycles were precious, you could not afford to discover errors at runtime. You designed systems where errors were structurally impossible.
Robin Milner’s ML language, which earned him the 1991 Turing Award, exemplified verification-first through type inference [Milner, 1978]. The compiler derives types automatically and catches entire categories of errors that runtime testing would miss. A well-typed program cannot produce certain classes of failures, not because someone tested all the cases, but because the structure of the language makes those failures inexpressible.
Leslie Lamport extended these principles to distributed systems. His book “Specifying Systems” opens with a simple conviction: “It’s a good idea to understand a system before building it, so it’s a good idea to write a specification of a system before implementing it” [Lamport, 2002]. The Byzantine Generals Problem, which Lamport formalized with Shostak and Pease in 1982, showed that even adversarial distributed systems can be reasoned about formally, that consensus itself can be verified as a property of the protocol, not just observed as emergent behavior [Lamport, Shostak & Pease, 1982].
The economics support this lineage. Studies have found that fixing bugs after release costs up to 100 times more than catching them during design. This is not an accounting quirk; it is a structural property of how errors propagate through systems. An error caught by the type checker costs the time to read a compiler message. An error discovered in production costs debugging time, incident response, customer trust, and sometimes human safety. Every systems engineer who has debugged production code knows this in their bones.
These researchers designed systems under constraints familiar to anyone who has shipped embedded software: limited memory, expensive cycles, real consequences for failure. Verification-first was not idealism. It was survival. An ideal distributed ledger would preserve this separation: transactions verifiable as well-formed at construction time, not discovered valid or invalid only at execution.
From Principle to Mechanism
There is a dependency here that is easy to miss: bounded verification is not a feature. It is an outcome.
A system cannot offer bounded verification by adding a module or optimizing a pathway. Bounded verification becomes possible only when the architecture was designed from inception to make validity locally determinable. The constraints come first. The capability follows.
This dependency flows in one direction:
A design philosophy (validity must be separable from execution) constrains architectural choices (UTXO over accounts, deterministic scripts over Turing-complete execution, fixed-size commitments over unbounded state). Those architectural choices then enable a verification mechanism (clients can establish validity within explicit resource bounds). The mechanism achieves the philosophy, but only because the architecture was never allowed to violate it.
The inverse does not hold. An architecture built around global state and unbounded computation cannot later achieve bounded verification through clever engineering. The state space is already unbounded. The execution model already assumes the verifier can replay arbitrary computation. These are not inefficiencies to be optimized. They are structural commitments that foreclose the possibility.
This is why Bitcoin’s constraints were not limitations to be overcome. They were the load-bearing walls. Remove them, and you do not get a more capable Bitcoin. You get a system where lightweight verification becomes architecturally impossible, no matter how many resources you allocate to the problem.
Bitcoin’s constraints were not limitations to be overcome. They were the load-bearing walls.
The researchers who built the verification discipline understood this asymmetry. Type systems do not add safety to unsafe languages; they constrain the language such that certain errors become inexpressible. The safety emerges from the constraint. Relax the constraint, lose the guarantee.
Bounded verification works the same way. It is not a feature of verification-first systems. It is what verification-first systems make possible.
What the Field Optimized For
The problem of lightweight verification is not neglected. Substantial engineering effort across the industry targets exactly this challenge, and has produced genuine advances.
Mina Protocol achieves constant-size blockchain proofs through recursive zk-SNARKs. The entire chain state compresses to roughly 22 kilobytes regardless of history. A new client can verify the current state in milliseconds without downloading blocks or replaying transactions. This is a remarkable achievement in proof compression.
Ethereum’s Verkle tree transition, expected to finalize in 2026, will reduce witness sizes by orders of magnitude, from megabytes under Merkle Patricia Tries to roughly 22 kilobytes per block. Stateless clients become viable. The Helios light client already demonstrates sub-second sync using sync committee signatures. The Portal Network decentralizes data availability so clients need not trust individual RPC providers.
FlyClient introduced logarithmic proof sizes for proof-of-work chains: syncing requires downloading only a sample of headers rather than all of them. Subsequent work on committee-based chains has achieved proof sizes up to 10,000 times smaller than prior approaches.
These projects optimize verification performance: smaller proofs, faster sync, reduced bandwidth. They ask: “How do we make verification cheaper?” This is valuable work. It makes verification more accessible.
But performance optimization assumes the verification task is already well-defined. It does not ask whether the scope of verification can be architecturally constrained in the first place. A client verifying Ethereum state, even with Verkle witnesses, must still reason about global state. A Mina client verifies a proof over the entire chain’s validity. The verification is cheap, but the scope remains the whole system.
The question almost no one asks: Can a protocol be designed such that a client only verifies what it needs to know, with explicit bounds on resources expended, and a legitimate option to refuse when bounds are exceeded?
Satoshi Still Wants His Two Dollars
Satoshi Nakamoto’s whitepaper Section 8 describes Simplified Payment Verification: “It is possible to verify payments without running a full network node. A user only needs to keep a copy of the block headers of the longest proof-of-work chain… and obtain the Merkle branch linking the transaction to the block” [Nakamoto, 2008]. Block headers are exactly 80 bytes, enabling SPV verification with minimal bandwidth [Bitcoin Developer Guide]. No global state. No trusted intermediary. Just mathematics.
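Per transaction, the mechanism Satoshi describes reduces to recomputing a Merkle root from a logarithmic-size branch. A minimal Python sketch, assuming Bitcoin's double-SHA256 and the usual position-based left/right ordering; serialization and networking are omitted:

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    """Bitcoin's double-SHA256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_merkle_branch(tx_hash: bytes, branch: list,
                         index: int, merkle_root: bytes) -> bool:
    """Recompute the root from a transaction hash and its Merkle branch.

    `index` is the transaction's position in the block; its low bit at
    each level says whether our hash is the left or the right child.
    Cost is O(log n) hashes, bounded regardless of block size.
    """
    h = tx_hash
    for sibling in branch:
        if index & 1:              # we are the right child
            h = sha256d(sibling + h)
        else:                      # we are the left child
            h = sha256d(h + sibling)
        index >>= 1
    return h == merkle_root
```

The work per transaction is a handful of hashes plus one 80-byte header per block, which is what makes verification feasible on constrained clients.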
On May 18, 2010, Satoshi wrote on BitcoinTalk: “SPV is not implemented yet, and won’t be implemented until far in the future, but all the current implementation is designed around supporting it” [Satoshi, 2010]. Read that again. The architecture was designed for bounded verification from inception. SPV was not a nice-to-have optimization. It was the plan. The entire system was built to support lightweight verification by resource-constrained clients.
The UTXO model supports local transaction verification. Each unspent output is a discrete unit verifiable independently. The Extended UTXO Model paper explains that Bitcoin chose UTXO “for good reasons, including that its semantic model stays simple in a complex concurrent and distributed computing environment” [Chakravarty et al., 2020]. Verification requires only the UTXO itself, its spending conditions, and proof of inclusion in the chain. No global state knowledge necessary.
Bitcoin Script is intentionally non-Turing complete. No loops means guaranteed termination. Stack-based design means predictable resource consumption. These are not limitations; they are design discipline. The constraints ensure that verification is bounded, that a transaction’s validity can be established without executing arbitrary computation or querying unbounded state.
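The boundedness claim can be made concrete with a toy evaluator. This is a hypothetical, loop-free stack language in the spirit of Bitcoin Script, not Script itself; the opcode names are invented. Because execution is a single pass over the opcodes, cost is linear in script length and termination is guaranteed by construction:

```python
def run_script(script, stack=None) -> bool:
    """Execute a loop-free script; the top of the stack decides validity."""
    stack = list(stack or [])
    for op in script:              # one pass: no jumps, no loops, no recursion
        if isinstance(op, int):    # literal: push it
            stack.append(op)
        elif op in ("ADD", "EQUAL"):
            if len(stack) < 2:
                return False       # stack underflow: the script is invalid
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == "ADD" else (1 if a == b else 0))
        else:
            return False           # unknown opcode: the script is invalid
    return bool(stack) and bool(stack[-1])
```

A verifier can price this work before running it: the evaluator can never diverge, so worst-case cost is known from the script's length alone.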
Most importantly, Bitcoin separates validity from ordering. A transaction is valid the moment it correctly spends its inputs according to their locking scripts. When that transaction gets ordered into a block is a separate question, handled by a separate mechanism. This mirrors the verification discipline: establish correctness structurally, then handle sequencing as a distinct concern.
Bitcoin demonstrated that a distributed ledger could embody verification-first principles. Subsequent projects chased features (smart contracts, throughput, expressiveness) while abandoning the foundation that made trustless verification possible. The scaffolding for lightweight verification exists. Almost no one built on it.
What the Field Forgot
Ethereum looked at Bitcoin and saw limitations to overcome. The Ethereum Design Rationale documents the explicit choice: “Ethereum jettisons this scheme in favor of a simpler approach” [Ethereum Design Rationale, 2014]. The rationale lists four justifications, including simplicity, noting that implementing equivalent functionality in UTXO would be “much more complicated and ugly.”
The Design Rationale even acknowledged what was being traded away. It noted that UTXOs offer a “higher degree of privacy” and are “more theoretically compatible with certain kinds of scalability paradigms.” The verdict: “We have decided that, particularly because we are dealing with dapps containing arbitrary state and code, the benefits of accounts massively outweigh the alternatives” [Ethereum Design Rationale, 2014].
They were not wrong that accounts are easier for developers. But they did not realize the awkwardness was load-bearing. Bitcoin’s constraints — no loops, stack-based execution, UTXO atomicity, 80-byte headers — were not limitations to be overcome. They were the discipline that made trustless verification possible. Remove the constraint, remove the guarantee.
When Ethereum captured the developer mindshare through the ICO boom, that architectural choice became the template the industry reacted to. Subsequent projects either adopted the account model, optimized around its limitations, or explicitly rejected it, but none returned to Bitcoin’s verification-first foundations. The question everyone asked was “how do we make blockchain do more?” Nobody was asking “how do we preserve what Bitcoin got right?”
Solana’s Anatoly Yakovenko, a former Qualcomm engineer who understood hardware constraints, made the rejection explicit. In a 2021 interview, he described his design philosophy: “Break away from all of these chains of proof of work. This idea that, do you need this full objective verification? And just go all in. Make the fastest possible communications network” [Yakovenko, 2021]. He adopted a trust-on-first-use model drawn from internet certificate authorities. This was not ignorance. It was a conscious choice to optimize for throughput over verification.
The pattern repeated across the industry. Hedera optimized for enterprise finality, building around asynchronous Byzantine fault tolerance for institutional use cases [Baird, 2016]. Polkadot optimized for interoperability, creating a “heterogeneous multi-chain framework” where different blockchains could communicate [Wood, 2016]. Each project chased a different goal (throughput, formality, privacy, governance) and inadvertently recreated centralization pressures at different layers. The common thread was that nobody optimized for verification on constrained devices.
The collateral damage fell on users. Every architectural choice that prioritized expressiveness or speed over verification-first design transferred sovereignty from the user to an intermediary. When a phone cannot verify its own transactions, “someone else” must verify on its behalf. When light clients are “impossible” by design, users must trust RPCs they cannot audit. The field optimized for what attracted capital (features, throughput, developer experience) and quietly sidelined the verification problem. The technology that was supposed to empower people with knowledge turned into a codified belief system.
Cardano stands apart as a deliberate exception. Its designers retained an extended UTXO model and cited the Ethereum DAO attack as justification, noting the vulnerability to recursive contract invocations that enabled that exploit [Chakravarty et al., 2020]. This class of attack simply cannot occur in Bitcoin’s design because non-Turing-complete Script forbids loops and recursion, while UTXO’s atomic consumption prevents mid-transaction state changes or callbacks. The broader industry prioritized expressiveness and speed while often dismissing Cardano’s choice as overly academic.
But even Cardano’s Extended UTXO does not preserve Bitcoin’s verification properties in practice. Light wallets still trust third-party backends [IOHK, 2021]. When DeFi launched on Cardano, the concurrency limitations were so severe that every major DEX had to introduce batchers, centralized intermediaries that collect, order, and execute transactions off-chain before submitting them to the protocol [builtoncardano.com, 2025]. As of May 2025, Minswap still uses batchers for transaction processing [learncardano.io, 2025]. The architecture that was supposed to eliminate trusted parties reintroduced them at the application layer. Extending UTXO for smart contracts introduced enough complexity that the practical benefits evaporated. The theoretical elegance did not survive contact with DeFi.
When Math Meets Reality
Cryptographers are mathematicians, and in mathematics, the verification-first problem does not exist. A proof is valid the moment it is constructed. There is no runtime. There is no network partition. There is no client that receives half a proof before connectivity fails. Mathematical objects do not degrade gracefully because they do not degrade at all.
The past two decades produced remarkable advances: zero-knowledge proofs moved from theoretical curiosity to practical deployment, threshold signatures enabled distributed key management, and polynomial commitments made succinct verification possible. These are genuine achievements.
But the questions cryptographers answer (soundness, completeness, knowledge extraction) are questions about the proof itself, not about the system that must transport, store, and verify it under resource constraints. When a protocol assumes the verifier receives the complete proof and has sufficient memory to check it, that assumption is invisible within the mathematics. It becomes visible only at deployment, when the verifier is a phone on a train with intermittent signal and limited computational resources.
In January 2026, Vitalik Buterin articulated exactly this gap:
“On a raw subconscious level, I don’t think I was sufficiently appreciative of the fact that in the real world, lots of things break. Sometimes the p2p network goes down. Sometimes the p2p network has 20x the latency you expected… Sometimes a third party service you’ve been relying on for years shuts down, and there isn’t a good alternative. If the alternative is that you personally go through a github repo and figure out how to PERSONALLY RUN A SERVER, lots of people will give up and never figure it out and end up permanently losing access to their money… If we are making a self-sovereign blockchain to last through the ages, THE ANSWER TO THE ABOVE CONUNDRUMS CANNOT ALWAYS BE ‘CALL THE DEVS’” [Buterin, 2026].
The “Mountain Man’s cabin,” as he called it, is not meant as the replacement lifestyle for everyone. It is meant as the safe place to retreat to when things go wrong. In blockchain terms, this means running a full node on capable hardware — the fallback that works when RPC providers fail, when third-party services shut down, when you have no choice but to verify everything yourself.
But running a full node is not an option for most users. The question this paper addresses is whether verification can be bounded enough to work on constrained devices — phones, browsers, embedded systems — without requiring either full-node hardware or trust in intermediaries.
Refusal semantics, bounded verification, and graceful degradation are not cryptographic problems. They are systems engineering problems that arise only when mathematical objects must survive contact with physical infrastructure.
PART II: RECOVERING THE DISCIPLINE
From Null to System
Hoare’s mistake was local: a single value in a single type system. But the pattern scales. The same failure mode recurs at the system level whenever a component produces output without establishing the conditions for that output’s correctness. A database that returns a result without verifying referential integrity. A network that routes packets without authenticating their source. An AI that generates text without any mechanism to distinguish knowledge from fabrication.
In each case, the architecture permits a maybe to propagate as if it were an answer. The downstream system cannot distinguish “verified true” from “plausibly true” from “confidently fabricated.” The uncertainty compounds silently until something fails — far from the original divergence, in ways that are expensive to diagnose and repair.
The verification discipline exists to prevent exactly this failure mode. A type checker refuses to propagate a maybe. A well-designed protocol refuses to forward an unverifiable claim. The refusal is not a limitation — it is the mechanism by which the system maintains its integrity.
The trillion-dollar question for distributed systems: what is the equivalent of the type checker’s refusal? What mechanism allows a verifier to say “I cannot establish the truth of this claim within my resource bounds” in a way that is explicit, machine-readable, and does not silently escalate to trust?
Refusal as the Missing Primitive
The verification discipline requires a third truth value. Classical logic admits two values: true and false. But a resource-constrained verifier faces situations where neither applies. The claim may be true. The claim may be false. The verifier cannot determine which within its bounds. What should it return?
The wrong answers are familiar. Return true optimistically — and propagate unverified claims. Return false conservatively — and reject valid claims. Throw an exception — and force the caller to handle an error that is not an error. Silently delegate to a trusted third party — and reintroduce the centralization that verification was supposed to eliminate.
The right answer has formal foundations. Stephen Cole Kleene’s three-valued logic (1938) introduced a third truth value for propositions whose truth cannot be determined — not “unknown” as a placeholder, but “undefined” as a legitimate semantic category [Kleene, 1938]. Jan Łukasiewicz’s earlier work on many-valued logics (1920) established that bivalence is a choice, not a necessity [Łukasiewicz, 1920]. SQL’s NULL semantics, despite their well-documented problems, represent an industrial attempt to grapple with the same issue: what happens when a value is neither present nor absent but simply indeterminate [Date, 2009].
Applied to verification, the three values become:
- TRUE: The claim is verified within the verifier’s resource bounds.
- FALSE: The claim is provably invalid — the proof fails, the signature doesn’t match, the Merkle root contradicts.
- REFUSED: The verifier cannot complete verification within its declared bounds.
Refusal is not an error. It is a correct and safe response. A verifier that refuses has not accepted any unverified claim. It has maintained its integrity by acknowledging the limits of what it can establish. Nancy Leveson’s safety engineering framework provides the independent principle: “Safety is an emergent property that arises from the interactions among system components… Safety constraints must be identified and enforced” [Leveson, 2011]. A system whose default state is refusal when verification cannot complete is safer than one whose default is acceptance or delegation.
The critical property is that refusal propagates. When multiple verification checks are composed, if any sub-check refuses, the composite refuses. No amount of successful sub-verifications can override a single refusal. This prevents partial verification from being mistaken for complete verification — the precise failure mode that occurs when systems stack mitigations without tracking which checks actually passed.
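These semantics fit in a few lines of Python. The names are mine, and so is the precedence between FALSE and REFUSED (here, provable invalidity outranks refusal, since it carries strictly more information); the text itself specifies only that refusal must dominate success:

```python
from enum import Enum

class Verdict(Enum):
    TRUE = "verified within declared bounds"
    FALSE = "provably invalid"
    REFUSED = "could not verify within bounds"

def compose(verdicts):
    """Combine sub-check verdicts so that refusal propagates.

    A single FALSE makes the composite provably invalid; otherwise a
    single REFUSED makes the composite REFUSED -- successful sub-checks
    never paper over an unverified one. Only all-TRUE yields TRUE.
    """
    verdicts = list(verdicts)
    if Verdict.FALSE in verdicts:
        return Verdict.FALSE
    if Verdict.REFUSED in verdicts:
        return Verdict.REFUSED
    return Verdict.TRUE
```

Note what is absent: there is no path by which a REFUSED sub-check is counted as TRUE, and no path that silently escalates it to a trusted third party.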
A refusing verifier is correct by construction. It may fail to answer — but it never lies.
When Proofs Become Economic Objects
Refusal creates scarcity. When verifiers can legitimately refuse, verification capacity becomes finite. Not every claim can be verified by every participant. This transforms proofs from mathematical objects into economic ones.
Friedrich Hayek’s insight about prices applies: “The mere fact that there is one price for any commodity… brings about the solution which… might have been arrived at by one single mind possessing all the information” [Hayek, 1945]. Prices aggregate distributed knowledge into a signal that coordinates behavior without central planning. The same mechanism operates on proofs when verification is scarce.
A proof has cost: computation to generate, bandwidth to transmit, storage to retain. A proof has value: the trust it establishes, the decisions it enables. A proof has lifespan: it remains verifiable only while its reference data persists within verifiers’ retention windows. These properties make proofs tradeable.
Gennaro, Gentry, and Parno (2010) formalized verifiable computation: a weak client can outsource computation to untrusted workers and verify the result with “substantially less computational effort than computing F from scratch” [Gennaro, Gentry & Parno, 2010]. Parno et al. (2013) demonstrated this is practical: Pinocchio “improves verification time by 5–7 orders of magnitude” [Parno et al., 2013]. When verification is a million times cheaper than production, a market becomes viable. Producers generate proofs; consumers verify them; the price reflects the cost of generation and the value of the trust established.
The refusal signal creates the demand side. When a verifier returns REFUSED_DATA_UNAVAILABLE, it is announcing: “I would verify this claim if I had the proof. I do not have the proof.” This is a buy signal. Someone who can supply that proof can extract value from the refusal. The cluster of refusals becomes a map of unmet demand — “hot spots” in the verification economy where proof supply lags proof demand.
Markets for proofs emerge spontaneously because verification scarcity creates the conditions for trade:
- Sellers: Participants who hold or can produce proofs.
- Buyers: Verifiers seeking to validate claims.
- Verification: Immediate upon receipt — no counterparty trust required.
- Fraud: Limited to selling invalid or expired proofs, which are detected instantly.
The market models are familiar from other domains: spot delivery (pay per proof), subscriptions (regular access to verified bundles), latency arbitrage (pay more for faster delivery). What distinguishes proof markets is that the goods are self-authenticating. A buyer can verify a proof’s validity without trusting the seller. This eliminates the information asymmetry that plagues most markets for credence goods.
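The demand side described above can be sketched as a trivial aggregation: each REFUSED verdict, tagged with the claim it could not check, is one data point, and the tally is the map of hot spots. The function name and claim identifiers are hypothetical:

```python
from collections import Counter

def demand_map(refused_claims):
    """Rank claim identifiers by how often verifiers refused them for
    lack of data -- a proxy for unmet proof demand, highest first."""
    return Counter(refused_claims).most_common()
```

For example, `demand_map(["blk812/tx4", "blk812/tx4", "blk799/tx1"])` ranks `blk812/tx4` first: the claim a proof seller can most profitably supply.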
Binary Validity and the Gradient Toward Truth
The crucial distinction between proof markets and other information markets is the nature of the goods being traded. In most markets for information — news, analysis, predictions, recommendations — quality is continuous and subjective. A report can be more or less accurate, more or less useful, more or less persuasive. The buyer cannot easily distinguish a good report from a plausible-sounding bad one. This creates the conditions for markets in misinformation, where producers optimize for plausibility rather than truth.
Cryptographic proofs are categorically different. A proof either verifies or it does not. Validity is binary and mechanically checkable. Goldwasser, Micali, and Rackoff’s foundational work on zero-knowledge proofs established the formal framework: a proof system has completeness (valid statements have convincing proofs) and soundness (invalid statements do not) [Goldwasser, Micali & Rackoff, 1985]. The soundness property is what matters here: no adversary can produce a proof that verifies for a false statement, except with negligible probability.
There is no “plausible proof.” A proof that fails verification is worthless regardless of how sophisticated it appears. A proof that passes verification is valid regardless of who generated it or what their reputation might be. Lamport, Shostak, and Pease proved in their Byzantine Generals paper that with cryptographic authentication (“unforgeable written messages”), agreement is achievable “for any number of generals and possible traitors” [Lamport, Shostak & Pease, 1982]. The proof’s validity depends on mathematics, not on the identity or trustworthiness of its source.
This creates a fundamentally different economic gradient. In markets for plausible information, producers optimize for what buyers find convincing — which may or may not correlate with truth. In markets for cryptographic proofs, producers can only compete by producing valid proofs. An efficiently-generated false proof is worthless; an expensively-generated true proof is still true. The gradient points toward truth because validity is the only dimension on which competition occurs.
Three properties follow:
Error detection is computationally trivial. Detecting an invalid proof requires running a verification algorithm that completes in milliseconds. The asymmetry between production and verification (5–7 orders of magnitude) means buyers can check proofs for a millionth of the cost of producing them. Fraud is not merely punishable — it is immediately, automatically, universally detectable.
Validity is source-agnostic. A proof generated by an unknown participant is as valid as one generated by a famous institution, if both pass verification. This eliminates the trust hierarchies that characterize markets for non-verifiable information, where reputation serves as a (poor) proxy for accuracy.
Error does not compound. An invalid proof is rejected at the point of verification. It does not propagate. This contrasts with markets for plausible information, where errors in early reports propagate through subsequent analysis, and confidence increases even as accuracy degrades.
The gradient toward truth is structural, not incentivized. It emerges from the nature of the goods being traded, not from reward mechanisms layered on top. A proof market does not need to incentivize honesty — it makes dishonesty unprofitable by making it immediately detectable.
Compositional Soundness
The deepest consequence of binary validity is compositional. When claims compose — when the output of one verification becomes the input to another — the nature of the underlying claims determines whether the composition preserves or degrades correctness.
Plausible claims do not compose into correct systems. If each step in a reasoning chain has a 90% probability of being correct, a ten-step chain has a joint probability of approximately 35%. At twenty steps, it falls to roughly 12%. More importantly, the failures are silent. Neither producer nor consumer can identify which steps are correct and which are errors, because plausibility provides no compositional guarantee. This is the fundamental problem with systems built on unverified claims: stacking them produces combinatorial expansion of potential error, with no mechanism to localize or contain it.
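The figures above are plain exponentiation: per-step reliability p gives joint reliability p**n for an n-step chain with no intermediate verification (assuming independent steps):

```python
def chain_reliability(p: float, n: int) -> float:
    """Probability that every step of an n-step chain is correct,
    given independent per-step correctness probability p."""
    return p ** n

# 0.9 ** 10 is about 0.349; 0.9 ** 20 is about 0.122.
```

The decay is geometric, which is why the number of unverified steps, not just per-step quality, determines whether the composite can be trusted.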
Verified claims compose soundly. If each step is accompanied by a cryptographic proof of specific correctness properties, and each proof is independently checked, then the composition of verified steps preserves the verified properties. Canetti’s Universal Composability framework formalizes this for cryptographic protocols: a protocol proven secure in the UC framework remains secure when composed with other UC-secure protocols [Canetti, 2001]. Reynolds’ separation logic provides the analogous framework for program verification: proofs about separate parts of a program compose into proofs about the whole [Reynolds, 2002]. The mathematics of composition is well-established.
The refusal semantics described earlier ensure that composition fails safely. If any step in a verified chain cannot be checked — because the proof is unavailable, because verification exceeds resource bounds, or because the claim falls outside the verifier’s declared scope — the composite verification refuses rather than proceeding with partial assurance. The consumer knows precisely which steps are verified and which are not. The system degrades explicitly, not silently.
This is the structural sense in which verification-first architecture is truth-seeking. Not that it makes participants more honest — their honesty is irrelevant. Not that it incentivizes truth-telling — incentives can be gamed. But that the infrastructure creates an environment where truth is the optimization target, error is immediately detectable, and composition preserves rather than degrades correctness. The system seeks truth because truth is what the verification layer rewards, in the same way that a type-checked program seeks type safety because type safety is what the compiler enforces.
The Structural Contrast
The characteristics of an ideal verification-first ledger have emerged throughout this analysis: transactions valid at construction, ordering separated from validity, light client verification without trusted intermediaries, verification costs independent of global state, and refusal as a correctness-preserving outcome. The question is whether any architecture actually delivers these properties — and if so, why structurally.
Buterin articulated the core constraint in January 2026:
“The scaling hierarchy in blockchains: Computation > data > state. Computation is easier to scale than data. You can parallelize it, require the block builder to provide all kinds of ‘hints’ for it, or just replace arbitrary amounts of it with a proof of it. Data is in the middle… State is the hardest. To guarantee the ability to verify even one transaction, you need the full state. If you replace the state with a tree and keep the root, you need the full state to be able to update that root. There are ways to split it up, but they involve architecture changes, they are fundamentally not general-purpose. Hence, if you can replace state with data (without introducing new forms of centralization), by default you should seriously consider it” [Buterin, 2026].
State is the hardest. This is the structural claim. Consider what happens when a light client wants to verify an account’s balance.
Ethereum’s account model stores all balances in a global state trie. The state root in each block header commits to every account simultaneously. To prove account A has balance X, you need:
- The block header containing the state root
- A Merkle Patricia Trie proof from that root to account A’s storage slot
- The proof includes sibling hashes at each level of the trie
The trie depth grows with the logarithm of the total number of accounts — currently over 250 million on Ethereum mainnet — so proof size scales with global state: more accounts means deeper proofs. A light client verifying one account still touches a data structure that commits to all accounts. This is not an implementation detail; it is a structural property of how account models commit to state. The Verkle tree transition improves proof size but does not change the structural dependency: verification of any account requires a proof against global state.
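The structural point can be shown with a simplified binary Merkle tree. Ethereum's trie is hexary with path compression, so the constants differ, but the dependency is the same in kind: an inclusion proof carries one sibling hash per level, and the number of levels tracks the logarithm of total leaves, no matter which single leaf you care about.

```python
import hashlib, math

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_levels(leaves):
    """Build all levels of a binary Merkle tree, leaf hashes first."""
    levels = [[h(l) for l in leaves]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:                     # duplicate last node on odd levels
            cur = cur + [cur[-1]]
        levels.append([h(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def prove(levels, index):
    """Inclusion proof: one sibling hash per level below the root."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[index ^ 1])
        index //= 2
    return proof

def verify(leaf, index, proof, root):
    node = h(leaf)
    for sib in proof:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root

leaves = [f"account-{i}".encode() for i in range(1024)]
levels = build_levels(leaves)
root = levels[-1][0]
proof = prove(levels, 42)
print(verify(leaves[42], 42, proof, root))            # True
print(len(proof), math.ceil(math.log2(len(leaves))))  # 10 10
```

Doubling the total number of accounts adds a level to every proof, including proofs for accounts the client never queries about. That is the coupling the account-chain model removes.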
Zenon’s account-chain model [OPERATIONAL: running on mainnet since November 2021] stores each account’s state in its own chain. The Momentum chain commits to account-chain frontiers — the hash of each account’s latest block — without committing to a global state trie.
To prove account A has balance X:
- The Momentum header containing the commitment root
- A proof that A’s frontier hash is included in that commitment — this scales with accounts touched in that Momentum, not total accounts
- The account-chain block itself, which contains the balance
The structural difference: verifying account A requires no reference to accounts B, C, D… The proof scales with accounts active in that time window, not total accounts in the system. A client interested only in account A downloads only A’s chain plus Momentum headers. This is not an optimization applied to a global-state architecture; it is a consequence of not having global state.
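A back-of-envelope comparison makes the scaling difference concrete. The account counts below are illustrative assumptions, not measured values, and both commitments are modeled as binary Merkle trees for simplicity:

```python
import math

# Proof depth = number of sibling hashes per inclusion proof, assuming
# binary Merkle commitments in both cases (a simplification).
TOTAL_ACCOUNTS   = 250_000_000   # global state, Ethereum-style trie
TOUCHED_ACCOUNTS = 10_000        # accounts active in one Momentum (assumed)

global_proof_depth = math.ceil(math.log2(TOTAL_ACCOUNTS))    # 28 levels
local_proof_depth  = math.ceil(math.log2(TOUCHED_ACCOUNTS))  # 14 levels

print(global_proof_depth, local_proof_depth)   # 28 14
```

The per-proof difference is modest in a log-scale world; the structural difference is what happens over time. The global-trie depth grows as total accounts grow, for every client. The per-Momentum commitment depth tracks the active set, and a client following one account syncs only that chain plus headers regardless of how large the system becomes.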
The Zenon Greenpaper formalizes this as:
Verification cost scales with local interest, not global chain growth.
This is why “verification scope” is categorically different from “verification performance.” Ethereum with Verkle trees makes proofs against global state cheaper. Zenon makes proofs against global state unnecessary for single-account verification.
Bounded Verification and Refusal
The account-chain structure enables a further property: explicit resource bounds with refusal semantics [SPECIFIED: formal design complete, implementation in progress].
Each verifier declares what it can handle: how much storage, how much bandwidth, how much computation. These are concrete numbers — megabytes, kilobytes per second, operations per request. A verification request that would exceed any bound returns REFUSED rather than failing or silently delegating.
The Greenpaper specifies three refusal codes:
REFUSED_OUT_OF_SCOPE — data outside retention window
REFUSED_DATA_UNAVAILABLE — proof not present
REFUSED_COST_EXCEEDED — verification would exceed declared limits
Refusal propagates through composition. If a composite verification includes any sub-verification that refuses, the composite refuses. No partial verification masquerades as complete verification.
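A minimal sketch of these semantics, with illustrative names and bounds (the Greenpaper's formal interface may differ):

```python
from enum import Enum
from typing import Optional

class Result(Enum):
    VERIFIED = "VERIFIED"
    REFUSED_OUT_OF_SCOPE = "REFUSED_OUT_OF_SCOPE"
    REFUSED_DATA_UNAVAILABLE = "REFUSED_DATA_UNAVAILABLE"
    REFUSED_COST_EXCEEDED = "REFUSED_COST_EXCEEDED"

class Verifier:
    """A verifier with a declared resource envelope (illustrative fields)."""
    def __init__(self, retention_height: int, max_proof_bytes: int):
        self.retention_height = retention_height   # oldest height retained
        self.max_proof_bytes = max_proof_bytes     # declared cost bound

    def verify(self, height: int, proof: Optional[bytes]) -> Result:
        if height < self.retention_height:
            return Result.REFUSED_OUT_OF_SCOPE
        if proof is None:
            return Result.REFUSED_DATA_UNAVAILABLE
        if len(proof) > self.max_proof_bytes:
            return Result.REFUSED_COST_EXCEEDED
        return Result.VERIFIED        # a real proof check would go here

def verify_composite(verifier: Verifier, requests) -> Result:
    """A composite verification refuses if any sub-verification refuses."""
    for height, proof in requests:
        r = verifier.verify(height, proof)
        if r is not Result.VERIFIED:
            return r                   # refusal propagates, with its cause
    return Result.VERIFIED

v = Verifier(retention_height=1_000, max_proof_bytes=4_096)
print(verify_composite(v, [(2_000, b"\x00" * 512), (500, b"\x00" * 512)]))
# second request is below the retention window, so the composite refuses
```

Note that the composite result carries the cause of refusal, not a partial success: the consumer learns that verification did not complete and why, which is the explicit degradation described above.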
This is structurally impossible in architectures where verification requires global state access. If proving account A’s balance requires traversing a global trie, you cannot refuse mid-traversal and maintain correctness — you either complete the proof or you have nothing. The account-chain model permits principled partial verification because accounts are structurally independent.
Applying the Architecture
Within this structural framework, Zenon’s implementation separates into operational, specified, and theoretical layers:
OPERATIONAL (code running on mainnet, independently verifiable):
Dual-ledger architecture
- Block-lattice for account-chains
- Momentum chain for ordering/commitment
- Each account maintains independent transaction history
- Momentums commit to account-chain frontiers via ChangesHash

Feeless transactions via plasma/QSR mechanics

Pillar consensus producing Momentums

Network processing transactions since November 2021
The code is public at github.com/zenon-network/go-zenon. These claims can be verified by running a node and inspecting the data structures.
SPECIFIED (formal design complete, not yet deployed):
Bounded verification with explicit resource constraints
- Verifiers declare resource envelopes
- Refusal codes for exceeded bounds
- Refusal propagation in composite verification

Sentinels as proof-serving infrastructure
- Nodes that package and serve proof bundles
- Respond to light client queries
- Economic incentives via refusal-density signals

Composable external verification (Bitcoin SPV)
- Verify Bitcoin state using same predicate interface
- Account-chains can depend on Bitcoin proofs

Browser-native light client architecture
- Sync Momentum headers only
- Request proof bundles on demand
- Verify locally within declared bounds
The specifications are public at github.com/TminusZ/zenon-developer-commons. These represent architectural commitments backed by formal definitions, not shipping features.
THEORETICAL (research direction, not yet specified):
Proof markets with economic equilibrium analysis

Extension chains and modular execution

zApps beyond current embedded contracts

Proof-agnostic verification interfaces
Why This Cannot Be Retrofitted
The structural contrast explains why bounded verification cannot be added to global-state architectures through clever engineering.
Ethereum’s state trie is a Merkle Patricia Trie where:
- Every account is a leaf
- The root commits to all leaves simultaneously
- Modifying any account changes the root
- Proving any account requires a path from root to leaf
This is the data structure. Verkle trees change the branching factor and proof size, but preserve the property that the root commits to everything. A light client verifying account A still receives a proof against a root that commits to accounts A through Z.
Zenon’s Momentum commits to account-chain frontiers via ChangesHash:
- Each account-chain is an independent append-only log
- ChangesHash commits to the set of frontiers touched in that Momentum
- Accounts not touched in that Momentum are not in that commitment
- Proving account A requires only A’s frontier and its inclusion proof
This is also a data structure choice — but one that preserves independence between accounts rather than coupling them through a shared root.
The consequence: verification scope in Ethereum scales with total accounts regardless of query. Verification scope in Zenon scales with accounts touched, plus a fixed cost per account-chain block of interest. For a client following one account, the difference is orders of magnitude.
This is not something Ethereum can add. The global state trie is the architecture. Removing it would be a different system.
The Work Ahead
This essay argues that Zenon’s architecture enables bounded verification. It does not claim bounded verification is deployed.
The structural argument is:
- Account-chain independence eliminates global state dependency [OPERATIONAL]
- Momentum commitments scale with touched accounts, not total accounts [OPERATIONAL]
- Therefore bounded verification is architecturally possible [STRUCTURAL CONSEQUENCE]
- The Greenpaper specifies how to implement it [SPECIFIED]
- Implementation will follow, or it won’t
The gap between specification and deployment is where the work remains. A reader evaluating these claims should:
- Run a node and inspect the account-chain data structures
- Read the Greenpaper’s formal definitions
- Judge whether the structural argument holds
- Wait for implementation to confirm or refute
Verification-first applies to this essay too. Don’t trust. Verify.
Future Directions
The verification-first framework suggests several avenues for further investigation.
The application horizon: AI and IoT. The internet is about to be flooded with two categories of unverifiable claims: AI-generated content that cannot self-verify its own correctness, and IoT sensor data that cannot prove its own provenance. Both produce plausible outputs at scale. Both lack native mechanisms for distinguishing truth from fabrication. Both would benefit from a substrate that makes verification an architectural primitive rather than an afterthought. A subsequent paper will examine how verification-first architecture applies to these specific domains.
Transport layer independence. The verification-first model requires eventual delivery of tamper-evident sequences, not the TCP/IP stack specifically. Extending the transport layer to mesh networks, delay-tolerant protocols, or pure P2P gossip could enable deployment in resource-constrained or partition-prone environments while preserving the core sequencing requirement. The key constraint is cryptographic structure, not synchronous connectivity.
Proof distribution economics. The economics of proof markets warrant formal analysis: pricing mechanisms, caching incentives, equilibrium conditions under adversarial withholding. When does a proof market clear? What determines the price of verification? How do refusal gradients shape network topology?
Post-quantum verification primitives. The current narrow waist assumes collision-resistant hash functions and ECDSA/Schnorr signatures remain secure. Transitioning to post-quantum primitives (hash-based signatures, lattice constructions) while maintaining bounded verification costs presents an open design challenge.
References
Historical/Foundational:
[Canetti, 2001] Canetti, R. “Universally Composable Security: A New Paradigm for Cryptographic Protocols.” Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science, 2001.
[Date, 2009] Date, C.J. SQL and Relational Theory: How to Write Accurate SQL Code. O’Reilly Media, 2009.
[Dijkstra, 1970] Dijkstra, E.W. “Notes on Structured Programming.” Report EWD 249, Technological University Eindhoven, August 1970.
[Gennaro, Gentry & Parno, 2010] Gennaro, R., Gentry, C., and Parno, B. “Non-interactive Verifiable Computing: Outsourcing Computation to Untrusted Workers.” Advances in Cryptology — CRYPTO 2010, LNCS vol 6223. Springer.
[Goldwasser, Micali & Rackoff, 1985] Goldwasser, S., Micali, S., and Rackoff, C. “The Knowledge Complexity of Interactive Proof-Systems.” Proceedings of the 17th Annual ACM Symposium on Theory of Computing, 1985.
[Hayek, 1945] Hayek, F.A. “The Use of Knowledge in Society.” American Economic Review, 35(4), 1945.
[Hoare, 1969] Hoare, C.A.R. “An Axiomatic Basis for Computer Programming.” Communications of the ACM, 12(10), 576-580, 1969.
[Hoare, 2009] Hoare, C.A.R. “Null References: The Billion Dollar Mistake.” QCon London, March 2009.
[Kleene, 1938] Kleene, S.C. “On Notation for Ordinal Numbers.” Journal of Symbolic Logic, 3(4), 1938.
[Lamport, 2002] Lamport, L. Specifying Systems: The TLA+ Language and Tools for Hardware and Software Engineers. Addison-Wesley, 2002.
[Lamport, Shostak & Pease, 1982] Lamport, L., Shostak, R., and Pease, M. “The Byzantine Generals Problem.” ACM Transactions on Programming Languages and Systems, 4(3), 382-401, July 1982.
[Leveson, 2011] Leveson, N.G. Engineering a Safer World: Systems Thinking Applied to Safety. MIT Press, 2011.
[Lukasiewicz, 1920] Łukasiewicz, J. “On Three-Valued Logic.” Ruch Filozoficzny, 5, 1920.
[Milner, 1978] Milner, R. “A Theory of Type Polymorphism in Programming.” Journal of Computer and System Sciences, 17(3), 348-375, 1978.
[Parno et al., 2013] Parno, B., Howell, J., Gentry, C., and Raykova, M. “Pinocchio: Nearly Practical Verifiable Computation.” IEEE Symposium on Security and Privacy, 2013.
[Reynolds, 2002] Reynolds, J.C. “Separation Logic: A Logic for Shared Mutable Data Structures.” Proceedings of the 17th Annual IEEE Symposium on Logic in Computer Science, 2002.
Bitcoin:
[Nakamoto, 2008] Nakamoto, S. “Bitcoin: A Peer-to-Peer Electronic Cash System.” 2008. bitcoin.org/bitcoin.pdf
[Satoshi, 2010] Nakamoto, S. BitcoinTalk post on SPV design intent. May 18, 2010. satoshi.nakamotoinstitute.org/quotes/bitcoin-design/
[Bitcoin Wiki, Script] Bitcoin Wiki. “Script.” en.bitcoin.it/wiki/Script
[Bitcoin Developer Guide] Bitcoin.org. “Block Headers.” developer.bitcoin.org/reference/block_chain.html
Ethereum:
[Ethereum Design Rationale, 2014] Ethereum Foundation. “Design Rationale.” ethereumbuilders.gitbooks.io/guide/content/en/design_rationale.html
[Buterin, 2026] Buterin, V. Twitter/X threads on blockchain scaling and real-world constraints. January 25-26, 2026.
Post-Bitcoin Architectures:
[Yakovenko, 2021] Yakovenko, A. Interview on Acquired Podcast, July 18, 2021. acquired.fm/episodes/solana-with-ceo-anatoly-yakovenko
[Wood, 2016] Wood, G. “Polkadot: Vision for a Heterogeneous Multi-Chain Framework.” November 2016. assets.polkadot.network/Polkadot-whitepaper.pdf
[Baird, 2016] Baird, L. “The Swirlds Hashgraph Consensus Algorithm: Fair, Fast, Byzantine Fault Tolerance.” Swirlds Technical Report SWIRLDS-TR-2016-01, 2016.
Cardano:
[Chakravarty et al., 2020] Chakravarty, M.M.T., Chapman, J., MacKenzie, K., Melkonian, O., Peyton Jones, M., and Wadler, P. “The Extended UTXO Model.” Financial Cryptography and Data Security, FC 2020. iohk.io/en/research/library/papers/the-extended-utxo-model/
[IOHK, 2021] IOHK. “Daedalus wallet and Yoroi wallet overview.” iohk.zendesk.com/hc/en-us/articles/360026058573
[builtoncardano.com, 2025] Built on Cardano. “Cardano DEX.” builtoncardano.com/ecosystem/dex
[learncardano.io, 2025] Learn Cardano. “Understanding Batchers on Minswap.” May 2025. learncardano.io/topic/understanding-batchers-on-minswap/
Zenon:
[Zenon Greenpaper, 2026] Zenon Developer Commons. “A Verification-First Architecture for Dual-Ledger Systems.” January 2026. github.com/TminusZ/zenon-developer-commons
[Zenon Orangepaper, 2026] Zenon Developer Commons. “Bitcoin Integration and Bounded SPV.” January 2026. github.com/TminusZ/zenon-developer-commons