
Preface

Petri nets were invented in 1962. They predate Unix, the internet, and object-oriented programming. For most of their history, they lived in academic papers — a formalism known to theorists but invisible to working programmers.

This book argues they deserve wider use. Not because they’re elegant (they are) but because they solve practical problems. A Petri net is a state machine that handles concurrency. It’s a workflow engine with formal guarantees. It’s a simulation model that converts to differential equations. It’s a specification that can be verified, compiled to code, and proven in zero knowledge.

The pflow ecosystem makes this practical. You draw a net in a browser editor, simulate it, generate a running application, and verify it across implementations — all from the same JSON-LD model. The formalism provides the guarantees. The tools make the formalism accessible.

Who This Book Is For

You know how to program but haven’t encountered Petri nets before — or you’ve seen the formalism and wondered what it’s good for in practice. Either way, this book meets you where you are.

The examples are in Go and JavaScript. The mathematics uses standard notation — vectors, matrices, differential equations. When a concept requires math, the math appears alongside working code so you can verify every claim by running it.

How to Read This Book

There are two paths through the material:

  • Theory track: Read Parts I and III for the mathematical foundations and advanced topics — formal definitions, incidence matrices, ODE solvers, zero-knowledge proofs, category theory.
  • Hands-on track: Skim Part I for vocabulary, then dive into Part II’s worked examples (coffee shops, games, biochemistry) and Part IV’s tooling guides.

Both tracks converge — the theory explains why the tools work, and the tools make the theory tangible.

Part I: Foundations

Chapters 1-4 introduce the formalism. Places, transitions, arcs, tokens — the four primitives. Then firing rules, incidence matrices, and conservation laws. Then the key insight: converting discrete nets to continuous ODE systems via mass-action kinetics. Finally, the token language DSL for defining nets with guards and typed arcs.

Part II: Applications

Chapters 5-10 apply the formalism to six domains: resource management (coffee shop), game mechanics (tic-tac-toe), constraint satisfaction (sudoku), combinatorial optimization (knapsack), biochemistry (enzyme kinetics), and complex state machines (Texas Hold’em). Each chapter builds a complete model from scratch, simulates it, and analyzes the results.

Part III: Advanced Topics

Chapters 11-14 push the formalism further. Process mining discovers nets from event logs. Zero-knowledge proofs verify transitions without revealing state. Exponential weights encode scoring systems. JSON-LD makes nets self-describing and content-addressable.

Part IV: Building with pflow

Chapters 15-18 cover the ecosystem tools. The visual editor for designing nets interactively. The code generator that turns models into full-stack applications. The Go library’s API and package structure. And the dual-implementation discipline that keeps Go and JavaScript in sync.

The pflow Ecosystem

The projects referenced throughout this book form a unified ecosystem:

  Project       Purpose                                                      URL
  go-pflow      Core library — ODE simulation, reachability, process mining  github.com/pflow-xyz/go-pflow
  pflow.xyz     Visual browser-based editor for designing nets               pflow.xyz
  petri-pilot   Code generator — turns net models into running applications  pilot.pflow.xyz

All three share the same JSON-LD model format. A net designed in the editor can be simulated by the library and compiled by the code generator without format conversion. The model is the source of truth.

Conventions

  • Code examples appear inline for short snippets and in separate files for longer programs
  • Mathematical notation uses standard linear algebra: bold lowercase for vectors (m, x), uppercase for matrices (A)
  • Cross-references link between chapters: “the coffee shop model (Chapter 5)” or “mass-action kinetics (Chapter 3)”
  • Go code uses the go-pflow library API; JavaScript uses the pflow.xyz browser modules
  • JSON-LD snippets use @context: "https://pflow.xyz/schema" throughout

Acknowledgments

This book grew from blog posts, documentation, and working code written over two years. The pflow ecosystem is open source, and the ideas draw on sixty years of Petri net theory — from Carl Adam Petri’s 1962 dissertation through the process mining work of Wil van der Aalst and the zero-knowledge proof systems enabled by gnark.

The best way to learn is to build. Open pflow.xyz in a browser, draw a net, and press play. The mathematics follows.

Why Petri Nets?

Learning objective: Understand what Petri nets offer over flowcharts, state machines, and ad-hoc modeling.

You already know how to model systems. You’ve drawn flowcharts on whiteboards, sketched state machines in design docs, and built ad-hoc abstractions in code. These tools work — until they don’t. This chapter is about what breaks, and what Petri nets offer instead.

The Problem with Informal Models

Every software project starts with a model, whether the team knows it or not. Someone draws boxes and arrows on a whiteboard. Someone else writes a state machine in an enum. A third person embeds the workflow directly in if-else chains. All three believe they’re describing the same system.

They usually aren’t.

Informal models fail in predictable ways:

Flowcharts can’t express concurrency. A flowchart is a sequence with branches. It handles “do A then B” and “do A or B” well enough. But “do A and B at the same time, then wait for both to finish” requires awkward workarounds — swim lanes, join bars, footnotes. Real systems are concurrent. Flowcharts pretend otherwise.

State machines explode. A state machine with n boolean variables has 2ⁿ possible states. A system tracking 10 independent conditions has 1,024 states. Most of those states are meaningless combinations, but the machine doesn’t know that. You end up either enumerating impossible states or pretending they can’t happen.

Ad-hoc models can’t be analyzed. When the workflow lives in code — scattered across handlers, middleware, and database triggers — there’s no way to ask “can this system deadlock?” or “is this queue bounded?” You find out in production, at 3am.

The root problem is the same in each case: the model doesn’t capture the structure of the system it represents. Concurrency is an afterthought. Resources are invisible. State is implicit. You can’t analyze what you haven’t modeled.

A Brief History

In 1962, Carl Adam Petri submitted his doctoral dissertation, Kommunikation mit Automaten (Communication with Automata), to the Technical University of Darmstadt. He was 36 years old, and he had been thinking about the problem for over a decade — his first sketches of what would become Petri nets date to 1939, when he was thirteen.

Petri’s insight was deceptively simple: concurrent systems need a model where multiple things can be true at the same time. State machines force you to enumerate every combination of conditions as a single global state. Petri’s nets decompose the state into independent pieces — places — each holding their own local truth. The interactions between these pieces are explicit in the structure of the net.

The formalism found its first serious applications in hardware design, where concurrency is unavoidable. By the 1980s, Petri nets had become a standard tool in manufacturing, telecommunications, and workflow modeling. They remain the mathematical foundation for business process notations like BPMN, even though most practitioners never see the underlying net.

Petri spent his career at the GMD (German National Research Centre for Computer Science) in Bonn, refining the theory and advocating for precise models of concurrent systems. He died in 2010. The formalism he created — places, transitions, and the flow of tokens between them — has proven remarkably durable. Sixty years later, no one has found a simpler model that handles concurrency, synchronization, and resource tracking in a single, analyzable structure.

Places, Transitions, Arcs, Tokens

A Petri net has exactly four kinds of things:

Places are drawn as circles. A place represents a condition that can be true or false, a resource that can be available or consumed, a state that a piece of the system can be in. Think of a place as a container.

Tokens are drawn as dots inside places. Tokens represent the current state of the system: which conditions hold, how many resources are available, where entities are in a process. The distribution of tokens across all places is called the marking — it’s the complete snapshot of the system at a moment in time.

Transitions are drawn as rectangles (or bars). A transition represents an event, an action, something that happens. Registration. Brewing. Firing. A transition changes the state of the system by moving tokens around.

Arcs are arrows connecting places to transitions and transitions to places. An arc from a place to a transition means “this transition needs a token from this place.” An arc from a transition to a place means “this transition produces a token in this place.” Arcs define the flow — who needs what, and who produces what.

That’s it. Four primitives. No inheritance hierarchies, no configuration languages, no plugin architectures. Every Petri net ever drawn — from a traffic light to a biochemical pathway to a poker game — is built from these four elements.

The Firing Rule

The one rule that makes the whole thing work:

A transition fires when every input place has at least one token. When it fires, it removes one token from each input place and adds one token to each output place.

This single rule gives you sequencing (token moves from A to B), concurrency (two transitions can fire independently if they don’t share input places), synchronization (a transition with two input places waits for both), and resource management (a place with tokens represents available resources).

Weighted Arcs

Sometimes a transition needs more than one token from a place, or produces more than one. A number on an arc — its weight — specifies how many tokens are consumed or produced. An arc with weight 3 from a place to a transition means “this transition needs 3 tokens from this place to fire.”
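The enabling check and the weighted firing rule together fit in a few lines. The sketch below is plain Go — not the go-pflow API — with a Marking type and a "brew" transition invented for illustration (3 water + 1 coffee → 1 cup):

```go
package main

import "fmt"

// Marking maps each place name to its token count.
type Marking map[string]int

// Transition has weighted input and output arcs.
type Transition struct {
	In  map[string]int // place -> tokens consumed (arc weight)
	Out map[string]int // place -> tokens produced (arc weight)
}

// Enabled reports whether every input place holds at least
// as many tokens as its arc weight demands.
func Enabled(m Marking, t Transition) bool {
	for place, w := range t.In {
		if m[place] < w {
			return false
		}
	}
	return true
}

// Fire consumes and produces tokens; callers check Enabled first.
func Fire(m Marking, t Transition) {
	for place, w := range t.In {
		m[place] -= w
	}
	for place, w := range t.Out {
		m[place] += w
	}
}

func main() {
	// brew: needs 3 water and 1 coffee, produces 1 cup
	brew := Transition{
		In:  map[string]int{"water": 3, "coffee": 1},
		Out: map[string]int{"cup": 1},
	}
	m := Marking{"water": 5, "coffee": 2, "cup": 0}
	for Enabled(m, brew) {
		Fire(m, brew)
	}
	fmt.Println(m) // map[coffee:1 cup:1 water:2]
}
```

The loop fires until the water supply drops below the arc weight of 3 — the transition stalls with resources left over, exactly the behavior the weighted rule prescribes.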

Your First Net: A Traffic Stoplight

A traffic light cycles through three states: green, yellow, red. At any moment, exactly one light is on. Here’s the Petri net:

    *
   [Green]--> [go_yellow] --> [Yellow]--> [go_red] --> [Red]--> [go_green]
      ^                                                              |
      +--------------------------------------------------------------+

Traffic light Petri net — one token cycles through three places

Three places: Green, Yellow, Red. Three transitions: go_yellow, go_red, go_green. One token, starting in Green.

The token’s position tells you which light is on. Fire go_yellow and the token moves from Green to Yellow. Fire go_red and it moves from Yellow to Red. Fire go_green and it cycles back to Green.
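The cycle can be checked mechanically. Here is a small sketch in plain Go (not the go-pflow API) that fires the three transitions in order and confirms the token count stays at one:

```go
package main

import "fmt"

// step moves the one token from place i to place (i+1) mod 3 —
// the effect of firing go_yellow, go_red, or go_green.
func step(m []int, i int) {
	m[i]--
	m[(i+1)%3]++
}

func main() {
	m := []int{1, 0, 0} // Green, Yellow, Red: token starts in Green
	names := []string{"go_yellow", "go_red", "go_green"}
	for cycle := 0; cycle < 6; cycle++ {
		i := cycle % 3
		step(m, i)
		fmt.Printf("%-9s -> %v (total %d)\n", names[i], m, m[0]+m[1]+m[2])
	}
}
```

Every line of output shows total 1 — the conservation property holds by construction, not by a runtime assertion.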

This is almost trivially simple — a state machine would work just as well. But notice what the Petri net gives you for free:

Conservation. There is exactly one token in the system. No sequence of firings can create a second token or destroy the existing one. The net’s structure guarantees that exactly one light is on at all times. You don’t need to assert this as a runtime check — it’s a mathematical property of the topology.

No impossible states. A state machine for this traffic light has three valid states and could, through a bug, reach an invalid one. The Petri net has no representation for “green and red are both on.” The structure makes it unrepresentable, not just unlikely.

Composability. Want to model an intersection? Add a second traffic light net. Connect them with a coordination mechanism — a shared place that ensures one direction is red when the other is green. Each light keeps its internal structure; the coordination is explicit in the arcs between them.

An Intersection

Two lights need to coordinate. We add a shared place Intersection_Free that acts as a mutual exclusion lock:

Light A: [Red_A] <-- [go_red_A] <-- [Green_A]
                                        ^
                              [Intersection_Free] *
                                        v
Light B: [Red_B] --> [go_green_B] --> [Green_B]

Intersection mutual exclusion — a shared token prevents both lights from being green

The token in Intersection_Free is consumed when one light turns green and returned when it turns red. Only one light can be green at a time — not because of a runtime check, but because there’s only one token and two transitions competing for it.

This pattern — a shared resource place controlling access — appears everywhere in Petri nets. It models mutexes, semaphores, limited capacity, and turn-taking. You’ll see it again in every chapter of this book.

Small Models, Not Large Language Models

The word “model” has been colonized. Say it in a meeting and people hear “large language model” — billions of parameters, inscrutable weights, probabilistic outputs. There’s another kind of model: small, executable, and inspectable all the way down.

Petri nets are small models. A coffee shop with five ingredients and three recipes is about 15 places and 8 transitions. A complete poker game — Texas Hold’em with all betting rounds — is under 50 places. A Sudoku solver is larger but still finite and fully visible.

Small models offer properties that large models cannot:

  Property             Large Language Models    Petri Nets
  Inspectable          No                       Yes — read the topology
  Composable           Awkward                  Native — connect shared places
  Provable properties  No                       Yes — invariants, reachability, liveness
  Executable           Via code generation      Directly — tokens flow
  Deterministic        No                       Yes, given the same firing sequence

This matters practically, not just philosophically. A Petri net model lets you answer questions before writing implementation code:

Reachability: “Given this initial state, can the system reach that target state?” This tells you whether a workflow is completable, whether a game has a winning move, whether an order can be fulfilled from current inventory.

Boundedness: “Does the system stay within limits?” This tells you whether a queue can overflow, whether a resource pool is properly managed, whether your system is closed or leaking.

Liveness: “Can the system deadlock?” This tells you whether every transition can eventually fire again, or whether some paths lead to permanent stuckness.

These aren’t abstract concerns. They’re the bugs that take down production systems. The difference is that with a Petri net, you can check them statically — before the system runs, before you’ve written the implementation, before the 3am page.

Models as Specification

Here’s the workflow this book advocates:

  1. Sketch the net — places for states and resources, transitions for events
  2. Simulate — run the ODE solver to see if tokens flow the way you expect
  3. Analyze — check invariants, reachability, boundedness
  4. Generate code — derive the implementation from the validated model

The model becomes the specification. The code becomes a derived artifact. When the model changes, the code changes with it. When there’s a disagreement between model and implementation, the model wins — because the model is what you analyzed and proved correct.

This is the opposite of how most software is built, where the code is the model and the specification exists (if at all) as outdated documentation. With Petri nets, the model is a living, executable, analyzable artifact that stays authoritative throughout the project’s life.

The rest of this book shows what this looks like in practice — from a coffee shop inventory system to a zero-knowledge proof of a game move, all built from places, transitions, arcs, and tokens.

Try it live: See the Stoplight for a cyclic state machine or the Dining Philosophers for mutual exclusion — both generated from Petri net models at pilot.pflow.xyz.

The Mathematics of Flow

Learning objective: Read and write the formal notation; understand firing rules and state equations.

Chapter 1 introduced Petri nets informally — circles, bars, arrows, dots. That’s enough for intuition, but not enough for analysis. To prove that a system can’t deadlock, or that a resource pool never goes negative, or that a workflow always terminates, you need precise definitions. This chapter provides them.

The notation looks heavy at first. It isn’t. Everything here follows from two ideas: a net is a bipartite graph with weights, and a marking is a vector of integers. The rest is matrix arithmetic.

The Formal Definition

A Petri net is a 5-tuple N = (P, T, F, W, m₀), where:

  • P is a finite set of places
  • T is a finite set of transitions
  • F ⊆ (P × T) ∪ (T × P) is the flow relation — the set of arcs
  • W : F → {1, 2, 3, …} is the arc weight function
  • m₀ : P → {0, 1, 2, …} is the initial marking

Two constraints keep things clean: P ∩ T = ∅ (nothing is both a place and a transition) and P ∪ T ≠ ∅ (the net isn’t empty).

That’s the entire definition. Five things. Places are where tokens live. Transitions are what moves them. Arcs say who connects to whom. Weights say how many tokens an arc carries. The initial marking says where tokens start.

Presets and Postsets

Every transition has inputs and outputs. The notation for these is compact and worth memorizing:

  • •t = { p ∈ P | (p, t) ∈ F } — the preset of t (input places)
  • t• = { p ∈ P | (t, p) ∈ F } — the postset of t (output places)

The same notation works for places:

  • •p = { t ∈ T | (t, p) ∈ F } — transitions that feed into p
  • p• = { t ∈ T | (p, t) ∈ F } — transitions that p feeds into

The bullet notation reads naturally: •x means “what comes before x,” and x• means “what comes after x.”

Markings and the State Vector

A marking m assigns a non-negative integer to each place — how many tokens it holds. Since a net has n = |P| places, we can write the marking as a column vector:

    m = (m(p₁), m(p₂), …, m(pₙ))ᵀ

This is the state vector. It captures everything about the system’s current state. Two markings are equal if and only if every place has the same token count in both. There’s no hidden state, no side channels, no context beyond the vector.

The initial marking m₀ is the starting state. For the traffic light from Chapter 1:

    m₀ = (1, 0, 0)ᵀ   — one token in Green, none in Yellow or Red

Every marking you can reach by firing transitions is a point in ℕⁿ — non-negative integer coordinates, one per place. The set of all reachable markings, starting from m₀, is the reachability set R(m₀). For the traffic light, R(m₀) contains exactly three points: the token in Green, the token in Yellow, the token in Red.

Firing Rules and Reachability

A transition t fires by consuming tokens from its input places and producing tokens in its output places. But it can only fire when it has enough tokens to consume. The enabling condition is:

    m(p) ≥ W(p, t)   for every p ∈ •t

Every input place must have at least as many tokens as the arc weight demands. If any input place is short, the transition is blocked.

When an enabled transition t fires, the new marking m′ is:

    m′(p) = m(p) − W(p, t) + W(t, p)   for every p ∈ P

For every place p: subtract the tokens consumed (arc from p to t) and add the tokens produced (arc from t to p). If p isn’t connected to t, both weights are zero and m′(p) = m(p) — unconnected places are unaffected.

A Worked Example

Consider a simple three-place chain:

[p1] -> t1 -> [p2] -> t2 -> [p3]

With initial marking m₀ = (1, 0, 0)ᵀ (one token in p1).

Step 1: Is t1 enabled? Its only input is p1, and m(p1) = 1 ≥ 1. Yes. Fire t1:

    m = (0, 1, 0)ᵀ

Step 2: Is t2 enabled? Its only input is p2, and m(p2) = 1 ≥ 1. Yes. Fire t2:

    m = (0, 0, 1)ᵀ

The token has moved from p1 through p2 to p3. The reachability set is R(m₀) = { (1,0,0)ᵀ, (0,1,0)ᵀ, (0,0,1)ᵀ }. Three reachable markings, matching intuition — the token can be in exactly one of three places.
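The same bookkeeping can be automated: a breadth-first search over markings enumerates the reachability set. This is an illustrative sketch in plain Go, not the go-pflow reachability package; the M and T types are invented for this example:

```go
package main

import "fmt"

// M is a marking over three places; arrays are comparable,
// so M works directly as a map key.
type M [3]int

// T moves one token from place From to place To (all weights 1).
type T struct{ From, To int }

// Reach enumerates every marking reachable from m0 by firing
// transitions in any order (breadth-first search).
func Reach(m0 M, ts []T) map[M]bool {
	seen := map[M]bool{m0: true}
	queue := []M{m0}
	for len(queue) > 0 {
		m := queue[0]
		queue = queue[1:]
		for _, t := range ts {
			if m[t.From] < 1 {
				continue // not enabled
			}
			next := m
			next[t.From]--
			next[t.To]++
			if !seen[next] {
				seen[next] = true
				queue = append(queue, next)
			}
		}
	}
	return seen
}

func main() {
	chain := []T{{0, 1}, {1, 2}} // t1: p1 -> p2, t2: p2 -> p3
	set := Reach(M{1, 0, 0}, chain)
	fmt.Println(len(set), "reachable markings") // 3 reachable markings
}
```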

Firing Sequences

A firing sequence is an ordered list of transitions: σ = t₁ t₂ … t_k. The sequence is valid from marking m₀ if each transition is enabled when it’s supposed to fire — t₁ is enabled at m₀, t₂ is enabled at the marking after t₁ fires, and so on.

We write m₀ [σ⟩ m to mean “firing sequence σ is valid from m₀ and leads to m.” The reachability set is then:

    R(m₀) = { m | m₀ [σ⟩ m for some firing sequence σ }

Nondeterminism

When multiple transitions are enabled simultaneously, any one of them could fire. The Petri net doesn’t prescribe which. This nondeterminism is a feature, not a bug — it models real concurrency where the order of independent events is genuinely unspecified.

In the traffic light, only one transition is ever enabled at a time, so there’s no nondeterminism. But in the intersection model from Chapter 1, when the shared token is available, both lights could claim it. The net structure constrains what’s possible; it doesn’t dictate the schedule.

The Incidence Matrix

Computing markings by hand gets tedious. The incidence matrix makes it mechanical.

The incidence matrix A has one row per place and one column per transition. Each entry is the net effect of firing that transition on that place:

    A[p, t] = W(t, p) − W(p, t)

Positive means the transition produces tokens in that place. Negative means it consumes. Zero means no effect.

Building the Matrix

For the three-place chain:

             t1   t2
    A = p1 [ -1    0 ]
        p2 [  1   -1 ]
        p3 [  0    1 ]

Reading column t1: firing t1 removes one token from p1 and adds one to p2. Reading column t2: firing t2 removes one from p2 and adds one to p3. Each column is a change vector — the delta applied to the marking when that transition fires.
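Building the matrix from an arc list is mechanical: subtract each place-to-transition weight, add each transition-to-place weight. A sketch in plain Go (the Arc type is invented for illustration; go-pflow’s own representation may differ):

```go
package main

import "fmt"

// Arc connects place P and transition T with weight W.
// In = true means place -> transition (tokens consumed);
// In = false means transition -> place (tokens produced).
type Arc struct {
	P, T int
	W    int
	In   bool
}

// Incidence builds the |P| x |T| matrix A[p][t] = W(t,p) - W(p,t).
func Incidence(nP, nT int, arcs []Arc) [][]int {
	A := make([][]int, nP)
	for i := range A {
		A[i] = make([]int, nT)
	}
	for _, a := range arcs {
		if a.In {
			A[a.P][a.T] -= a.W // consumed by the transition
		} else {
			A[a.P][a.T] += a.W // produced by the transition
		}
	}
	return A
}

func main() {
	// three-place chain: p1 -> t1 -> p2 -> t2 -> p3
	arcs := []Arc{
		{P: 0, T: 0, W: 1, In: true},
		{P: 1, T: 0, W: 1, In: false},
		{P: 1, T: 1, W: 1, In: true},
		{P: 2, T: 1, W: 1, In: false},
	}
	for _, row := range Incidence(3, 2, arcs) {
		fmt.Println(row)
	}
	// [-1 0]
	// [1 -1]
	// [0 1]
}
```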

The State Equation

This is the payoff. With the incidence matrix, the firing rule becomes matrix multiplication:

    m′ = m₀ + A·x

Where x is the firing count vector — how many times each transition fires. For a single firing of t1, x = (1, 0)ᵀ:

    m′ = (1, 0, 0)ᵀ + A·(1, 0)ᵀ = (0, 1, 0)ᵀ

Fire both transitions once (x = (1, 1)ᵀ) to go directly from m₀ to the final marking:

    m′ = (1, 0, 0)ᵀ + A·(1, 1)ᵀ = (0, 0, 1)ᵀ

The state equation is powerful because it’s linear. Given any firing count vector, you can compute the resulting marking with one matrix multiplication. But there’s a caveat: the state equation doesn’t check enabling. A firing count vector might describe a sequence that isn’t valid — some transition might need tokens that aren’t there yet. The state equation gives you a necessary condition for reachability, not a sufficient one.
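The state equation is one nested loop. A minimal sketch in plain Go, using the chain’s incidence matrix (remember the caveat: this computes the result of a firing count, it does not check enabling):

```go
package main

import "fmt"

// Apply computes the state equation m' = m0 + A*x.
// A is |P| x |T|; x counts how many times each transition fires.
func Apply(A [][]int, m0, x []int) []int {
	m := make([]int, len(m0))
	for p := range m0 {
		m[p] = m0[p]
		for t := range x {
			m[p] += A[p][t] * x[t]
		}
	}
	return m
}

func main() {
	A := [][]int{{-1, 0}, {1, -1}, {0, 1}} // three-place chain
	m0 := []int{1, 0, 0}
	fmt.Println(Apply(A, m0, []int{1, 0})) // fire t1 once: [0 1 0]
	fmt.Println(Apply(A, m0, []int{1, 1})) // fire both once: [0 0 1]
}
```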

A Richer Example

Consider a net where a transition has two inputs — the synchronization pattern:

[p1] --> t1 --> [p3]
[p2] --/

Transition t1 needs a token from both p1 and p2 to fire. The incidence matrix is a single column:

             t1
    A = p1 [ -1 ]
        p2 [ -1 ]
        p3 [  1 ]

With m = (1, 1, 0)ᵀ, firing t1 gives m′ = (0, 0, 1)ᵀ. With m = (1, 0, 0)ᵀ, t1 is not enabled — p2 has no token. The synchronization is enforced by the structure.

P-Invariants and Conservation Laws

The incidence matrix reveals something deeper than individual firings. It reveals what cannot change, no matter what sequence of transitions fires.

A P-invariant (place invariant) is a row vector y such that:

    y·A = 0

The vector y assigns a weight to each place such that the weighted sum of tokens is preserved by every transition. If y is a P-invariant, then for any reachable marking m:

    y·m = y·m₀

This is a conservation law. No matter what fires, no matter what order, the weighted token count stays constant.

Finding P-Invariants

For the three-place chain, try y = (1, 1, 1):

    y·A = (−1 + 1 + 0, 0 − 1 + 1) = (0, 0)

It works. The invariant says:

    m(p1) + m(p2) + m(p3) = m₀(p1) + m₀(p2) + m₀(p3) = 1

The total number of tokens is always 1. We proved this for the traffic light informally in Chapter 1 — now we’ve proved it with linear algebra, and the proof works for any net where (1, 1, …, 1)·A = 0.
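Checking a candidate invariant is a single vector-matrix product. A sketch in plain Go (not the go-pflow analyzer API):

```go
package main

import "fmt"

// IsPInvariant checks y*A = 0. If it holds, the weighted token
// sum y.m is preserved by every possible firing.
func IsPInvariant(y []int, A [][]int) bool {
	nT := len(A[0])
	for t := 0; t < nT; t++ {
		sum := 0
		for p := range y {
			sum += y[p] * A[p][t] // dot product of y with column t
		}
		if sum != 0 {
			return false
		}
	}
	return true
}

func main() {
	A := [][]int{{-1, 0}, {1, -1}, {0, 1}} // three-place chain
	fmt.Println(IsPInvariant([]int{1, 1, 1}, A)) // true: total tokens conserved
	fmt.Println(IsPInvariant([]int{1, 0, 0}, A)) // false: p1 alone is not conserved
}
```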

What Conservation Laws Tell You

P-invariants answer practical questions:

“Can tokens leak?” If y = (1, 1, …, 1) is an invariant, the total token count is constant. The net is closed — nothing is created or destroyed. This is the Petri net equivalent of conservation of mass.

“Is this resource pool managed correctly?” If two places p_free (available) and p_busy (busy) satisfy m(p_free) + m(p_busy) = k for some constant k, then resources are always either available or busy, never duplicated or lost.

“Is the net bounded?” If every place appears in at least one P-invariant with a positive coefficient, then no place can accumulate unbounded tokens. The net is structurally bounded — it stays within limits regardless of the initial marking.

Partial Invariants

Not every net has a P-invariant covering all places. A net with a “source” transition (one that produces tokens from nothing) or a “sink” transition (one that consumes tokens into nothing) breaks total conservation. But subsets of places may still be conserved.

For example, in a producer-consumer system:

[source] -> t_produce -> [buffer] -> t_consume -> [sink]

The total token count isn’t conserved — t_produce creates tokens and t_consume destroys them. But if you add a capacity limit by looping tokens back:

[empty_slots: 5] -> t_produce -> [buffer] -> t_consume -> [empty_slots]

Now m(empty_slots) + m(buffer) = 5 is an invariant. The buffer can never exceed 5.

In Code

The go-pflow library computes incidence matrices and checks invariants directly:

analyzer := reachability.NewInvariantAnalyzer(net)
matrix, places, transitions := analyzer.IncidenceMatrix()
invariants := analyzer.FindPInvariants(initialMarking)

for _, inv := range invariants {
    fmt.Printf("Conserved: %v = %d\n", inv.Coefficients, inv.Value)
}

// Check if total tokens are conserved
isConservative := analyzer.CheckConservation(initialMarking)

The IncidenceMatrix method builds the matrix from the net’s arc structure. FindPInvariants solves y·A = 0 to find conservation laws. CheckConservation tests whether the all-ones vector is an invariant — whether total token count is preserved.

Putting It Together

The mathematics of this chapter — the 5-tuple, the state equation, the incidence matrix, P-invariants — form a complete toolkit for structural analysis. Given a Petri net, you can:

  1. Write the incidence matrix from the arc structure
  2. Compute reachable markings using the state equation
  3. Find conservation laws by solving y·A = 0
  4. Prove boundedness from positive P-invariants
  5. Check for deadlocks by finding markings where no transition is enabled

All of this is static analysis — you don’t run the net, you reason about its structure. This is what distinguishes Petri nets from simulation frameworks: the topology itself is a proof.

But there’s a limit. Discrete Petri nets track individual tokens and individual firings. For large systems — thousands of tokens, complex topologies — the reachability set can be enormous. The next chapter shows how to break through this limit by letting tokens be real numbers and transitions fire continuously, turning the state equation into a differential equation.

From Discrete to Continuous

Learning objective: Convert a Petri net into an ODE system and understand why this is powerful.

The previous chapter gave you a toolkit for analyzing Petri nets: incidence matrices, state equations, P-invariants. All of that works by tracking integer tokens through discrete firing events. This chapter shows what happens when you let go of the integers — when tokens become real-valued concentrations and transitions fire continuously at rates determined by the net’s structure. The state equation becomes a differential equation, and the entire apparatus of calculus becomes available.

Why Discrete Event Simulation Hits Walls

Discrete Petri net simulation is straightforward. Pick an enabled transition. Fire it. Update the marking. Repeat.

For small nets this works fine. But as systems grow, three problems compound:

Event volume scales with population. A hospital ER with 10 patients might generate 50 events (arrivals, triages, consultations, discharges). The same ER with 10,000 patients generates 500,000 events. Each event must be processed sequentially — check which transitions are enabled, choose one, fire it, update the marking, record the event. The cost scales linearly with the number of tokens moving through the system.

State space explodes. The reachability set grows combinatorially with the number of tokens and places. A net with 10 places and 100 total tokens can have astronomical numbers of reachable markings. Enumerating them becomes infeasible long before the model is “large” by real-world standards.

Individual identity is usually irrelevant. Discrete simulation tracks which specific token moved where. But for capacity planning, bottleneck analysis, or throughput estimation, you don’t care about Patient 47. You care about how many patients are waiting, on average, and how fast the queue drains. Tracking individuals is expensive bookkeeping for information you don’t need.

The continuous approach dissolves all three problems. Tokens become real numbers — concentrations, not counts. Transitions fire continuously at rates determined by the current state. The evolution of the system is described by a set of ordinary differential equations (ODEs) that can be solved in milliseconds regardless of the population size.

Mass-Action Kinetics

The bridge from discrete to continuous comes from chemistry, where it’s called mass-action kinetics: the rate of a reaction is proportional to the product of the concentrations of its reactants.

Consider a transition t with rate constant k and two input places p1 and p2:

[p1] --> t[k] --> [p3]
[p2] --/

In the discrete world, t fires when both p1 and p2 have at least one token. In the continuous world, t fires at a rate that depends on how many tokens are in its inputs:

    rate(t) = k · m(p1) · m(p2)

More tokens in the input places means a higher firing rate. Zero tokens in any input place means zero rate — the transition stalls. This is the continuous analog of the enabling condition: instead of a binary enabled/disabled, you get a smooth rate that goes to zero as inputs deplete.

The General Rate Law

For a transition t with rate constant k, input places •t, and arc weights W(p, t):

    rate(t) = k · ∏_{p ∈ •t} m(p)^W(p, t)

The firing rate is the rate constant times the product of input markings, each raised to the power of its arc weight. An arc weight of 1 gives linear dependence (the common case). An arc weight of 2 gives quadratic dependence — the rate grows with the square of the concentration, modeling reactions that require two units of the same resource.
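The rate law translates directly to code. A hedged sketch in plain Go — the function and its arguments are invented for illustration, not the go-pflow API:

```go
package main

import (
	"fmt"
	"math"
)

// Rate computes the mass-action firing rate of one transition:
// k times the product over input places of m(p) raised to the
// arc weight W(p, t).
func Rate(k float64, m map[string]float64, weights map[string]int) float64 {
	r := k
	for place, w := range weights {
		r *= math.Pow(m[place], float64(w))
	}
	return r
}

func main() {
	m := map[string]float64{"p1": 2.0, "p2": 3.0}
	// synchronization: one token from each of p1 and p2
	fmt.Println(Rate(0.5, m, map[string]int{"p1": 1, "p2": 1})) // 0.5*2*3 = 3
	// weight-2 arc on p1: quadratic dependence
	fmt.Println(Rate(0.5, m, map[string]int{"p1": 2})) // 0.5*2^2 = 2
}
```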

Why This Works

Mass-action kinetics isn’t just a convenient formula. It’s the natural rate law when transitions fire at random with probability proportional to the availability of inputs. If a transition needs one token from p1 and one from p2, and tokens are selected at random, the probability of a successful firing scales as m(p1)·m(p2).

Chemistry discovered this in the 19th century. The same logic applies to any system where events happen stochastically and depend on resource availability: customers joining queues, orders consuming inventory, players taking turns.

The Continuous Relaxation

With mass-action rates defined, we can write the ODE system for any Petri net. Recall the incidence matrix A from Chapter 2, where A[p, t] is the net change in place p when transition t fires. Collect all transition rates into a vector:

    v(m) = (rate(t₁), rate(t₂), …, rate(t_|T|))ᵀ

The continuous state equation is:

    dm/dt = A·v(m)

This says: the rate of change of the marking is the incidence matrix times the rate vector. Each place’s token count changes by the weighted sum of all transition rates affecting it. Tokens flow in from transitions that produce, and flow out to transitions that consume.
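In code, the continuous state equation is a matrix-vector product evaluated at the current marking. A plain-Go sketch (illustrative, not the go-pflow solver API):

```go
package main

import "fmt"

// Deriv computes dm/dt = A * v(m): each place changes by the
// weighted sum of the rates of the transitions touching it.
func Deriv(A [][]float64, v func([]float64) []float64, m []float64) []float64 {
	rates := v(m)
	dm := make([]float64, len(m))
	for p := range dm {
		for t, r := range rates {
			dm[p] += A[p][t] * r
		}
	}
	return dm
}

func main() {
	// three-place chain with rate constants k1 = k2 = 1
	A := [][]float64{{-1, 0}, {1, -1}, {0, 1}}
	v := func(m []float64) []float64 {
		return []float64{1 * m[0], 1 * m[1]} // k1*m(p1), k2*m(p2)
	}
	fmt.Println(Deriv(A, v, []float64{1, 0, 0})) // [-1 1 0]: p1 drains into p2
}
```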

Example: Three-Place Chain

The simplest non-trivial example:

[p1] -> t1[k1] -> [p2] -> t2[k2] -> [p3]

Incidence matrix and rate vector:

             t1   t2
    A = p1 [ -1    0 ]        v(m) = ( k1·m(p1) )
        p2 [  1   -1 ]               ( k2·m(p2) )
        p3 [  0    1 ]

The ODE system dm/dt = A·v(m), written out per place:

    dm(p1)/dt = −k1·m(p1)
    dm(p2)/dt =  k1·m(p1) − k2·m(p2)
    dm(p3)/dt =  k2·m(p2)

Starting from m₀ = (1, 0, 0)ᵀ with rate constants k1 and k2:

ODE trajectories for the three-place chain

  • Tokens drain from p1 exponentially: m(p1) decays as e^(−k1·t)
  • Tokens build up in p2, then drain to p3
  • Eventually all tokens accumulate in p3

The conservation law still holds — m(p1) + m(p2) + m(p3) is constant for all time — because (1, 1, 1)·A = 0. Tokens aren’t created or destroyed, just redistributed continuously.

What Changes, What Stays

The continuous relaxation changes the execution model (real-valued tokens, rate-based firing) but preserves the structural properties:

  • Conservation laws still hold. If y·A = 0 in the discrete case, the same identity holds in the continuous case, and the weighted token sum y·m is constant for all time.
  • The incidence matrix is identical. The structure of the net doesn’t change.
  • Boundedness from P-invariants still applies.

What you gain is speed and scalability. Solving the ODE system takes the same time whether you start with 10 tokens or 10,000. And the solution gives you a smooth trajectory — the entire evolution of the system, not just a sequence of discrete snapshots.

What you lose is individual identity. Token 47 no longer exists. You have 4.3 tokens in a place — a concentration, an average, a continuous quantity. For most modeling purposes, this is exactly the right trade-off.

Numerical Integration

Most ODE systems can’t be solved analytically. Even the three-place chain above has a closed-form solution only because it’s linear. Add a synchronization transition (two inputs) and the system becomes nonlinear — the rate depends on the product of two state variables. Analytical solutions vanish.

Instead, we solve numerically: start at the initial marking, compute the derivatives, step forward in time, repeat.

Euler’s Method (and Why It’s Not Enough)

The simplest numerical method is Euler’s method:

\[ x_{n+1} = x_n + h\,f(x_n) \]

where \(h\) is the timestep and \(f\) is the derivative function, here \(f(x) = C \cdot v(x)\). Compute the slope, take a step in that direction.

Euler’s method works but is inaccurate. The error grows as \(O(h)\) — to halve the error, you must halve the timestep and double the work. For Petri net simulations with rates spanning several orders of magnitude, tiny timesteps are needed, making Euler impractically slow.

Runge-Kutta Methods

The standard remedy is to evaluate the derivative at multiple points within each step and combine them for a better approximation. These are Runge-Kutta methods, and they’re classified by their order — the power of \(h\) in the error term.

A 4th-order method (RK4) has error \(O(h^5)\) per step. A 5th-order method has error \(O(h^6)\) per step. Higher order means fewer steps for the same accuracy.

Tsit5: What go-pflow Uses

The default solver in go-pflow is Tsit5 — a 5th-order Runge-Kutta method designed by Charalampos Tsitouras in 2011. It uses 7 function evaluations per step to compute both a 5th-order solution and a 4th-order solution. The difference between them estimates the error.

This error estimate enables adaptive timestep control:

  1. Compute both the 5th-order and 4th-order solutions
  2. Estimate the error: \(e = \lVert x^{(5)} - x^{(4)} \rVert\)
  3. If \(e \le \mathrm{tol}\): accept the step, try a larger step next time
  4. If \(e > \mathrm{tol}\): reject the step, retry with a smaller step

The solver automatically takes large steps where the solution is smooth and small steps where it changes rapidly. You specify the error tolerance, not the timestep. This is why the same solver handles both a slow-draining reservoir and rapid chemical kinetics without manual tuning.

In code:

net, rates := petri.Build().
    Place("A", 10).Place("B", 0).Place("C", 0).
    Transition("t1").Transition("t2").
    Arc("A", "t1", 1).Arc("t1", "B", 1).
    Arc("B", "t2", 1).Arc("t2", "C", 1).
    WithRates(1.0)

prob := solver.NewProblem(net, net.SetState(nil), [2]float64{0, 50}, rates)
sol := solver.Solve(prob, solver.Tsit5(), solver.DefaultOptions())

final := sol.GetFinalState()
// final["A"] ~= 0.0, final["C"] ~= 10.0

The Tsit5() solver, DefaultOptions() tolerance settings, and the mass-action kinetics built into NewProblem handle everything. The user specifies the net, the rates, and the time span. The solver does the rest.

Other Solvers

go-pflow provides several Runge-Kutta methods for different use cases:

| Solver  | Order | Use Case                                          |
|---------|-------|---------------------------------------------------|
| Tsit5() | 5(4)  | Default — best balance of speed and accuracy      |
| RK45()  | 5(4)  | Dormand-Prince — the classic MATLAB ode45         |
| BS32()  | 3(2)  | Bogacki-Shampine — faster per step, less accurate |
| RK4()   | 4     | Classic fixed-step — no adaptive control          |
| Euler() | 1     | Forward Euler — educational only                  |

For most Petri net simulations, Tsit5 is the right choice. RK45 produces nearly identical results. The lower-order methods exist for comparison and for systems where function evaluations are extremely expensive.

Solver Presets

Rather than manually tuning tolerances, go-pflow provides presets for common scenarios:

solver.DefaultOptions()    // General purpose (Dt=0.1, Reltol=1e-6)
solver.FastOptions()       // Game AI, interactive (Dt=0.5, Reltol=1e-3)
solver.AccurateOptions()   // Research, publishing (Dt=0.001, Reltol=1e-9)
solver.JSParityOptions()   // Match pflow.xyz JS solver exactly

The FastOptions preset trades precision for speed — about 10x faster, suitable for game AI where you need many evaluations per second. AccurateOptions tightens tolerances for publication-quality results.

Equilibrium Detection

Many systems settle into a steady state where nothing changes anymore. The ODE solver can detect this automatically.

Equilibrium means:

\[ \dot{x} = C \cdot v(x) = 0 \]

All derivatives are zero. Tokens still exist, but the flow in equals the flow out at every place. The system has reached a balance.

For the three-place chain, equilibrium is \(x^* = (0, 0, 10)\) — all tokens have moved to \(p_3\) and no transitions can fire (both input places are empty, so both rates are zero). This is a terminal equilibrium — the system has stopped.

Other nets reach dynamic equilibrium where transitions still fire but the marking doesn’t change. A net with a cycle:

[p1] -> t1[k1] -> [p2] -> t2[k2] -> [p1]

reaches equilibrium when \(k_1 x_1 = k_2 x_2\) — the flow from \(p_1\) to \(p_2\) equals the flow back. The tokens are still moving, but the concentrations are stable.

Why Equilibrium Matters

Equilibrium tells you the long-run behavior of the system:

Capacity planning. A hospital ER at equilibrium has stable queue lengths. Those lengths tell you whether you have enough doctors, whether patients wait too long, whether the system is overloaded.

Bottleneck identification. At equilibrium, tokens accumulate before the slowest transition. A place with a high equilibrium concentration is a bottleneck — work piles up there because the downstream transition can’t keep up.

Optimization. The equilibrium value of an objective function (profit, throughput, win probability) tells you how good a configuration is. Change a rate constant, re-solve to equilibrium, compare.

In Code

go-pflow can solve until equilibrium is detected, stopping early instead of integrating to an arbitrary end time:

prob := solver.NewProblem(net, initialState, [2]float64{0, 1000}, rates)

sol, result := solver.SolveUntilEquilibrium(
    prob,
    solver.Tsit5(),
    solver.DefaultOptions(),
    solver.DefaultEquilibriumOptions(),
)

if result.Reached {
    fmt.Printf("Equilibrium at t=%.2f\n", result.Time)
    fmt.Printf("Final state: %v\n", result.State)
    fmt.Printf("Max rate of change: %e\n", result.MaxChange)
}

The equilibrium detector monitors the maximum absolute derivative across all places. When it stays below a tolerance (default 1e-6) for several consecutive steps, the solver declares equilibrium and stops. Presets control the sensitivity:

solver.DefaultEquilibriumOptions()  // Tolerance 1e-6, 5 consecutive steps
solver.FastEquilibriumOptions()     // Tolerance 1e-4, 3 steps (game AI)
solver.StrictEquilibriumOptions()   // Tolerance 1e-9, 10 steps (research)
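The detection logic itself is only a few lines. This sketch mirrors the described behavior (tolerance plus consecutive-step counting) with illustrative names, not the go-pflow API:

```go
package main

import (
	"fmt"
	"math"
)

// detector declares equilibrium once the largest |dx/dt| stays below
// tol for `needed` consecutive checks.
type detector struct {
	tol         float64
	needed      int
	consecutive int
}

func (d *detector) check(derivs []float64) bool {
	maxAbs := 0.0
	for _, v := range derivs {
		if a := math.Abs(v); a > maxAbs {
			maxAbs = a
		}
	}
	if maxAbs < d.tol {
		d.consecutive++
	} else {
		d.consecutive = 0 // any large derivative resets the streak
	}
	return d.consecutive >= d.needed
}

func main() {
	d := &detector{tol: 1e-6, needed: 5}
	// Feed it derivatives that decay toward zero, as in a settling system.
	x, step := 1.0, 0
	for !d.check([]float64{x, -x / 2}) {
		x *= 0.5
		step++
	}
	fmt.Printf("equilibrium declared after %d checks\n", step)
}
```

Requiring several consecutive quiet steps guards against declaring equilibrium during a momentary lull, such as a trajectory crossing a flat inflection point.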

The Full Picture

The continuous relaxation transforms the Petri net from a discrete-event system into a dynamical system. The incidence matrix becomes the stoichiometry matrix. The firing rule becomes mass-action kinetics. The state equation becomes a differential equation.

This transformation is faithful — conservation laws, structural bounds, and topology carry over unchanged. But the continuous form is dramatically more efficient to simulate and opens the door to equilibrium analysis, sensitivity analysis, and gradient-based optimization.

The pattern for using continuous Petri nets is:

  1. Build the net — places, transitions, arcs (same as discrete)
  2. Assign rates — a rate constant for each transition
  3. Solve the ODE — from initial marking to final time or equilibrium
  4. Read the results — token trajectories, equilibrium values, conservation checks

The remaining chapters in this book use this pattern extensively. The coffee shop in Chapter 5 uses equilibrium analysis for capacity planning. The knapsack problem in Chapter 8 uses continuous relaxation to approximate discrete optimization. The enzyme kinetics in Chapter 9 recover classical biochemistry equations directly from the net’s structure.

Every time, the steps are the same: define the net, set the rates, solve, interpret. The mathematics is always \(\dot{x} = C \cdot v(x)\). The solver handles the numerics. You focus on the model.

The Token Language

Learning objective: Use the pflow DSL to define nets with guards, roles, and typed arcs.

The first three chapters built Petri nets from mathematical primitives — places, transitions, arcs, tokens, rate constants. That’s sufficient for analysis but awkward for specification. You wouldn’t write a web application by specifying its state machine as a 5-tuple. You’d want a language.

This chapter introduces the token language — a small, interpreted DSL for defining Petri net models with rich data semantics. It extends the mathematical foundation with guards, typed state, invariants, and categorical net types, while keeping the vocabulary to exactly four terms.

The Four-Term Primitive

The token language distills to four primitives:

| Term  | Maps To             | Purpose                                 |
|-------|---------------------|-----------------------------------------|
| cell  | State / Place       | Containers for tokens or data           |
| func  | Action / Transition | Operations that transform state         |
| arrow | Arc                 | Flow connections with optional bindings |
| guard | Predicate           | Boolean conditions gating actions       |

Every token model — from an ERC-20 token standard to a poker game to a hospital workflow — is expressed with these four terms. cell defines what exists. func defines what happens. arrow defines what flows where. guard defines what’s allowed.

This isn’t minimalism for its own sake. Four terms is the smallest vocabulary that captures the essential structure of executable schemas. Fewer terms would lose expressiveness. More terms would add redundancy. The four-term primitive sits at the same sweet spot as Petri nets themselves: just enough structure to be useful, not so much that it obscures the model.

S-Expression Syntax

The DSL uses an S-expression syntax. Here’s a complete token model for a simplified ERC-20 token:

(schema ERC-20
  (version v1.0.0)

  (state totalSupply :type uint256)
  (state balances :type map[address]uint256 :exported)
  (state allowances :type map[address]map[address]uint256)

  (action transfer :guard {balances[from] >= amount && to != address(0)})
  (action approve)
  (action mint :guard {to != address(0)})
  (action burn :guard {balances[from] >= amount})

  (arc balances -> transfer :keys (from))
  (arc transfer -> balances :keys (to))
  (arc mint -> totalSupply)
  (arc totalSupply -> burn)

  (constraint conservation {sum(balances) == totalSupply}))

Each form declares structure:

  • state declares a cell with a name and type. The :exported keyword marks states that are visible across schema boundaries.
  • action declares a func, optionally with a guard expression in braces.
  • arc declares an arrow between a state and an action (or vice versa). The -> is syntactic sugar for the direction. :keys specifies map access paths for data states.
  • constraint declares an invariant — a property that must hold after every state transition.

The syntax is deliberately simple. No variable declarations, no control flow, no modules. The structure of the model is visible in the shape of the text.

In Go

The same model can be constructed with a fluent builder API:

schema := dsl.Build("ERC-20").
    Version("v1.0.0").
    Data("totalSupply", "uint256").
    Data("balances", "map[address]uint256").Exported().
    Data("allowances", "map[address]map[address]uint256").
    Action("transfer").Guard("balances[from] >= amount && to != address(0)").
    Action("approve").
    Action("mint").Guard("to != address(0)").
    Action("burn").Guard("balances[from] >= amount").
    Flow("balances", "transfer").Keys("from").
    Flow("transfer", "balances").Keys("to").
    Flow("mint", "totalSupply").
    Flow("totalSupply", "burn").
    Constraint("conservation", "sum(balances) == totalSupply").
    MustSchema()

Both representations — S-expression and builder — produce the same executable schema. The S-expression is better for reading and sharing. The builder is better for programmatic construction and code generation.

Guards and Conditional Firing

In a basic Petri net, a transition fires whenever its input places have enough tokens. Guards add a second condition: even if tokens are available, the transition fires only when its guard expression evaluates to true.

Expression Syntax

Guards support a familiar expression language:

Comparison:    balances[from] >= amount
               to != address(0)

Logical:       condition1 && condition2
               condition1 || condition2
               !condition

Indexing:      balances[key]
               allowances[owner][spender]

Field access:  schedule.revocable
               data.timestamp

Functions:     sum(balances)
               count(owners)
               address(0)

The evaluator uses short-circuit evaluation — && stops on the first false, || stops on the first true — so guards with early-exit conditions are efficient.

Guard vs. Invariant

Both guards and invariants are boolean expressions, but they serve different purposes and fire at different times:

| Aspect       | Guard               | Invariant              |
|--------------|---------------------|------------------------|
| When checked | Before action fires | After state update     |
| On failure   | Action is blocked   | Transaction rolls back |
| Purpose      | Precondition        | System property        |
| Scope        | Single action       | Entire schema          |

A guard asks: “Is this action allowed right now?” An invariant asks: “Did this action break anything?”

For the ERC-20 transfer, the guard balances[from] >= amount prevents overdrafts. The invariant sum(balances) == totalSupply catches any bug that would create or destroy tokens. The guard is the first line of defense; the invariant is the safety net.

Evaluation Context

Guards evaluate against a context containing three things:

  1. Bindings — variable values from the action’s parameters (from, to, amount)
  2. Functions — built-in aggregates and custom functions (sum, count, address)
  3. State — the current snapshot for data access

When the guard balances[from] >= amount runs, from is resolved from the bindings, balances is resolved from the current state, and >= performs the comparison. Missing map keys default to zero — accessing balances[nonexistent] returns 0 rather than an error.
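Go’s own maps share the missing-key-defaults-to-zero convention, which makes the guard easy to mirror in a sketch; guardTransfer is an illustrative helper, not part of the DSL runtime:

```go
package main

import "fmt"

// guardTransfer mirrors the ERC-20 guard `balances[from] >= amount`.
// Like the token-language evaluator, a missing map key reads as zero,
// so an unknown sender can never pass the balance check.
func guardTransfer(balances map[string]uint64, from string, amount uint64) bool {
	return balances[from] >= amount
}

func main() {
	balances := map[string]uint64{"alice": 100}
	fmt.Println(guardTransfer(balances, "alice", 50)) // true: 100 >= 50
	fmt.Println(guardTransfer(balances, "bob", 1))    // false: missing key reads as 0
}
```

The zero-default convention means guards never have to special-case "account does not exist yet": a fresh address simply behaves as an account with balance 0.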

Weighted Arcs and Inhibitor Arcs

Basic arcs have an implicit weight of 1 — one token consumed, one token produced. Weighted arcs generalize this.

Weighted Arcs

An arc with weight \(w\) from a place to a transition means “this transition consumes \(w\) tokens from this place when it fires.” An arc with weight \(w\) from a transition to a place means “this transition produces \(w\) tokens in this place when it fires.”

In the token language, arc weights appear in the schema definition. A coffee recipe that requires 2 units of water and 1 unit of grounds:

(state water :kind token :initial 10)
(state grounds :kind token :initial 5)
(state coffee :kind token :initial 0)

(action brew)

(arc water -> brew :value 2)
(arc grounds -> brew :value 1)
(arc brew -> coffee :value 1)

The brew action consumes 2 water and 1 grounds, producing 1 coffee. The enabling condition requires m(water) ≥ 2 and m(grounds) ≥ 1.

Inhibitor Arcs

An inhibitor arc is the opposite of a normal arc: it prevents a transition from firing when the source place has tokens, rather than when it doesn’t. Drawn as an arc with a small circle instead of an arrowhead, an inhibitor arc from place \(p\) to transition \(t\) means “\(t\) can only fire when \(p\) is empty.”

Inhibitor arcs add expressiveness beyond standard Petri nets. They can test for zero — something that basic Petri nets cannot do. This makes inhibitor nets Turing-complete, at the cost of losing some decidability results.

In practice, inhibitor arcs model capacity limits and mutual exclusion. A buffer with maximum capacity 5:

(state buffer :kind token :initial 0)
(state empty_slots :kind token :initial 5)

(action produce)
(action consume)

(arc empty_slots -> produce :value 1)
(arc produce -> buffer :value 1)
(arc buffer -> consume :value 1)
(arc consume -> empty_slots :value 1)

This uses a complementary place (empty_slots) rather than an inhibitor arc — a common pattern that achieves the same effect while staying within the standard Petri net formalism. The P-invariant m(buffer) + m(empty_slots) = 5 is maintained by the net’s structure.
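A few lines of Go make the pattern concrete: produce is structurally blocked after five firings, and the buffer plus empty-slots total holds at every step. This is an illustrative discrete simulation, not the token-language runtime:

```go
package main

import "fmt"

// bufferNet simulates the bounded buffer with a complementary place.
// Invariant: buffer + emptySlots == capacity, by construction.
type bufferNet struct {
	buffer, emptySlots int
}

func (n *bufferNet) produce() bool {
	if n.emptySlots < 1 {
		return false // not enabled: buffer is full
	}
	n.emptySlots--
	n.buffer++
	return true
}

func (n *bufferNet) consume() bool {
	if n.buffer < 1 {
		return false // not enabled: buffer is empty
	}
	n.buffer--
	n.emptySlots++
	return true
}

func main() {
	n := &bufferNet{buffer: 0, emptySlots: 5}
	fired := 0
	for n.produce() {
		fired++
	}
	fmt.Printf("produced %d times, then blocked; invariant = %d\n",
		fired, n.buffer+n.emptySlots) // produced 5 times; invariant = 5
}
```

No capacity check appears anywhere in the firing logic; the limit emerges from the arcs alone, which is exactly the point of the pattern.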

Tokens vs. Data

The token language distinguishes two kinds of state, reflecting two fundamentally different ways that places hold information.

TokenState

Integer counters representing discrete quantities. This is classic Petri net semantics:

  • Firing consumes and produces tokens
  • The enabling condition checks token counts
  • Tokens are fungible — we track counts, not identities

A place with kind: token behaves exactly like a mathematical Petri net place. Tokens are integers. Arcs carry weights. Conservation laws hold.

DataState

Typed containers holding structured data — maps, records, scalar values. This extends Petri nets to handle real-world state:

  • Arc keys specify access paths (balances[from])
  • Data arcs read and write; they don’t consume
  • Types are explicit (map[address]uint256, bool, string)

A place with kind: data holds a value, not a count. The balances place in the ERC-20 schema holds a map from addresses to amounts. The transfer action doesn’t consume the map — it reads from one key and writes to another.

The combination of TokenState and DataState lets you model systems where both control flow (tokens tracking “where we are”) and data transformations (balances, permissions, configuration) matter. A workflow token tracks the order through pending → confirmed → shipped. Data states track the order’s contents, the customer’s address, the payment amount.

Categorical Net Types

Not all Petri nets are alike. An order-processing workflow and an epidemiological simulation both use places, transitions, and arcs, but their tokens mean fundamentally different things. Net types classify Petri nets by how their tokens behave, what invariants hold, and how they compose.

The Five Types

| Type              | Token Semantics                          | Invariant                                      |
|-------------------|------------------------------------------|------------------------------------------------|
| WorkflowNet       | Single control-flow token (cursor)       | Mutual exclusion — cursor in exactly one state |
| ResourceNet       | Countable inventory tokens               | Conservation — total tokens constant           |
| GameNet           | Turn-based tokens with player roles      | Both mutual exclusion and conservation         |
| ComputationNet    | Continuous quantities via ODE rates      | Rate conservation at steady state              |
| ClassificationNet | Signal tokens with threshold activation  | Threshold firing conditions                    |

WorkflowNet. A single token traces a path through states. An order moves through pending → confirmed → shipped → delivered. The token is a cursor — it marks where you are in a sequential process. WorkflowNets are finite state machines with the structural advantage that parallelism (multiple concurrent tokens) is expressible.

ResourceNet. Tokens represent countable, fungible things: inventory items, currency, capacity slots. A warehouse has 100 tokens in available, consumed by reserve and produced into reserved. The total is conserved — the P-invariant guarantees that nothing is created or destroyed.

GameNet. Tokens encode both game state and turn structure. In tic-tac-toe, empty cells hold tokens consumed when a player moves. A turn-control token alternates between players. GameNets combine WorkflowNet sequencing (turns) with ResourceNet accounting (board positions).

ComputationNet. Tokens are continuous, not discrete. Transitions fire at rates — mass-action kinetics from Chapter 3. An SIR epidemic model is a ComputationNet: the infection rate depends on the product of susceptible and infected populations.

ClassificationNet. Tokens accumulate as evidence. Transitions fire when enough tokens accrue — a threshold. A spam filter accumulates signal tokens from heuristics (suspicious sender, keyword matches, link density) and fires classify_spam when the threshold is met.
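The threshold behavior of a ClassificationNet reduces to summing signal tokens and comparing against a limit. This sketch uses made-up heuristic names and weights:

```go
package main

import "fmt"

// classify fires once the accumulated signal tokens reach the threshold,
// mirroring a ClassificationNet's threshold transition.
func classify(signals map[string]int, threshold int) bool {
	evidence := 0
	for _, tokens := range signals {
		evidence += tokens // each heuristic contributes its signal tokens
	}
	return evidence >= threshold
}

func main() {
	// Hypothetical spam heuristics depositing tokens in an evidence place.
	msg := map[string]int{
		"suspicious_sender": 2,
		"keyword_match":     3,
		"link_density":      1,
	}
	fmt.Println(classify(msg, 5)) // true: 6 tokens >= threshold 5
}
```

No single heuristic is decisive; the transition fires only when the combined evidence crosses the threshold, which is what distinguishes this net type from simple boolean guards.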

Why Types Matter

At the structural level, all five types are Petri nets. The type annotations matter because they constrain composition. When you connect two nets, the types determine what connections are valid:

  • An EventLink connects transitions to transitions across schemas — when one fires, the other fires too
  • A DataLink connects places to places — read-only observation across schema boundaries
  • A TokenLink transfers tokens between schemas — resource coupling with cross-boundary conservation
  • A GuardLink gates a transition in one schema on a place in another — constraint coupling

Types make the composition rules explicit. You can’t accidentally link a workflow cursor to an inventory counter. The type system prevents semantic nonsense at the structural level.

Composition

Real systems are compositions of multiple nets. An order system combines a WorkflowNet (order lifecycle) with a ResourceNet (inventory), linked by EventLinks:

Orders (WorkflowNet):    pending -> confirm -> confirmed -> ship -> shipped
Inventory (ResourceNet): available -> reserve -> reserved -> ship_out -> consumed

EventLinks:

  1. orders.confirm → inventory.reserve — confirming reserves stock
  2. orders.ship → inventory.ship_out — shipping consumes reserved stock

Each schema carries its own verified properties. The Orders WorkflowNet guarantees mutual exclusion (the order is in exactly one state). The Inventory ResourceNet guarantees conservation (total stock is constant). These properties hold independently — adding links doesn’t invalidate either guarantee.

This is assume-guarantee reasoning: each component is verified in isolation, and composition only needs to check the boundaries. The algebraic properties of composition — associativity, commutativity, monotonicity — ensure that building systems incrementally is safe. Adding a new schema or link can only extend behavior, never break what’s already working.

The Execution Pipeline

From source text to running model, the token language follows a classic interpreter pipeline:

  1. Lexer — tokenizes source into LPAREN, RPAREN, SYMBOL, GUARD, KEYWORD, ARROW
  2. Parser — builds an AST with SchemaNode, StateNode, ActionNode, ArcNode, ConstraintNode
  3. Interpreter — converts the AST to an executable Schema with validated references
  4. Runtime — executes actions: guard check → input processing → output processing → invariant check → commit

Each action execution follows a strict sequence:

  1. Guard check: evaluate the guard expression with current bindings and state
  2. Input processing: consume tokens from input arcs, bind data from data arcs
  3. Output processing: produce tokens at output arcs, update data at data arcs
  4. Invariant check: verify all constraints still hold
  5. Commit: update the snapshot with an incremented sequence number

If the guard fails, the action is blocked — nothing changes. If an invariant fails after the state update, the entire transaction rolls back. The sequence number provides an optimistic concurrency control mechanism: clients can detect stale state by comparing sequence numbers.
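The five-step sequence can be sketched as a transactional fire function. Types and names here are illustrative, not the actual go-pflow runtime API; the point is the copy-then-commit structure that makes rollback trivial:

```go
package main

import (
	"errors"
	"fmt"
)

// snapshot holds the current state plus a sequence number for
// optimistic concurrency control.
type snapshot struct {
	state map[string]int
	seq   int
}

// fire runs guard -> apply (inputs/outputs) -> invariant -> commit.
// A failed invariant discards the working copy, rolling back cleanly.
func fire(s *snapshot, guard func(map[string]int) bool,
	apply func(map[string]int), invariant func(map[string]int) bool) error {
	if !guard(s.state) {
		return errors.New("guard failed: action blocked")
	}
	next := make(map[string]int, len(s.state)) // work on a copy
	for k, v := range s.state {
		next[k] = v
	}
	apply(next)
	if !invariant(next) {
		return errors.New("invariant failed: transaction rolled back")
	}
	s.state = next
	s.seq++ // commit with incremented sequence number
	return nil
}

func main() {
	s := &snapshot{state: map[string]int{"count": 10}}
	err := fire(s,
		func(m map[string]int) bool { return m["count"] > 0 }, // guard
		func(m map[string]int) { m["count"]-- },               // update
		func(m map[string]int) bool { return m["count"] >= 0 }, // invariant
	)
	fmt.Println(err, s.state["count"], s.seq) // <nil> 9 1
}
```

Because apply only ever touches the copy, a rollback is the absence of a commit rather than an undo log, and the sequence number advances only on successful commits.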

schema, err := dsl.ParseSchema(`
  (schema Counter
    (state count :kind token :initial 10)
    (state done :kind token :initial 0)
    (action decrement :guard {tokens(count) > 0})
    (arc count -> decrement)
    (arc decrement -> done))
`)

The ParseSchema function runs the full pipeline — lexing, parsing, interpretation, and validation — returning an executable schema or an error.

Putting It Together

The token language adds three layers on top of the mathematical Petri net:

  1. Guards and invariants — conditional firing and conservation enforcement
  2. Typed state — TokenState for discrete counts, DataState for structured values
  3. Net types and composition — categorical classification with typed links

These layers don’t replace the mathematics. They organize it. The incidence matrix, state equation, and P-invariants from Chapters 2 and 3 still apply. Guards add preconditions. Invariants add postconditions. Types add composition rules.

The four-term vocabulary — cell, func, arrow, guard — is the interface between human intent and the mathematical machinery. You describe the model in these terms. The machinery handles the rest.

Part II of this book uses the token language extensively. Each application chapter defines its model in this DSL, simulates it with the ODE solver from Chapter 3, and analyzes it with the structural tools from Chapter 2. The language is the common thread — the same four terms, applied to coffee shops, game boards, constraint puzzles, biochemical pathways, and poker tables.

Resource Modeling — The Coffee Shop

Learning objective: Model inventory, throughput, and bottlenecks as token flow.

The first four chapters built the machinery: bipartite graphs, incidence matrices, ODEs, a DSL. Now we use it. This chapter models a coffee shop — ingredients, recipes, orders, capacity limits — as a Petri net. The model answers operational questions: When do we run out of beans? Which drink creates the worst bottleneck? How should we staff during peak hours?

The coffee shop is a ResourceNet in the categorical vocabulary from Chapter 4. Tokens represent fungible quantities — grams of beans, milliliters of milk, cups. Conservation laws guarantee that nothing is created or destroyed without an explicit transition. The continuous relaxation from Chapter 3 turns the model into a capacity planning tool.

The Problem

A coffee shop has limited inventory. Different drinks consume ingredients at different rates:

| Drink      | Beans | Water | Milk  | Cups |
|------------|-------|-------|-------|------|
| Espresso   | 18g   | 30ml  | –     | 1    |
| Americano  | 18g   | 200ml | –     | 1    |
| Latte      | 18g   | 30ml  | 180ml | 1    |
| Cappuccino | 18g   | 30ml  | 120ml | 1    |
| Mocha      | 18g   | 30ml  | 150ml | 1    |

Starting inventory: 1,000g beans, 5,000ml milk, 10,000ml water, 100 cups.

The questions are straightforward. Which ingredient runs out first? How many drinks can we serve before something depletes? If we increase latte production, what’s the knock-on effect on bean consumption?

You could answer these with a spreadsheet. But a spreadsheet gives you static arithmetic — multiply rates by time. A Petri net gives you dynamics — rates that change as resources deplete, multiple transitions competing for the same inputs, bottlenecks emerging from structure rather than being hard-coded.

Defining the Net

The coffee shop net has three categories of places:

Ingredient places hold the current stock. Each starts with an initial token count representing the inventory on hand:

coffee_beans: 1000    (grams)
milk:         5000    (milliliters)
water:        10000   (milliliters)
cups:         100
syrup:        500     (pumps)

Consumption tracking places count what’s been used. They start at zero and accumulate as drinks are made:

beans_used:  0
milk_used:   0
water_used:  0
cups_used:   0
syrup_used:  0

Transitions represent drink preparation — one per drink type:

make_espresso
make_americano
make_latte
make_cappuccino
make_mocha

Coffee shop Petri net — ingredients flow through recipes into tracking places

The bipartite structure is clean: ingredient places connect to transitions (input arcs), and transitions connect to tracking places (output arcs). No place connects to another place. No transition connects to another transition. The Petri net enforces this separation by construction.

In Code

net := petri.NewPetriNet()

// Ingredient places
net.AddPlace("coffee_beans", 1000, nil, 100, 100, nil)
net.AddPlace("milk", 5000, nil, 100, 200, nil)
net.AddPlace("water", 10000, nil, 100, 300, nil)
net.AddPlace("cups", 100, nil, 100, 400, nil)
net.AddPlace("syrup", 500, nil, 100, 600, nil)

// Tracking places
net.AddPlace("beans_used", 0, nil, 300, 100, nil)
net.AddPlace("milk_used", 0, nil, 300, 200, nil)
net.AddPlace("water_used", 0, nil, 300, 300, nil)
net.AddPlace("cups_used", 0, nil, 300, 400, nil)
net.AddPlace("syrup_used", 0, nil, 300, 600, nil)

Each place gets an initial token count. The tracking places start at zero — they’re sinks that accumulate evidence of consumption. Together with the ingredient places, they form a conservation system: every bean consumed from coffee_beans appears in beans_used.

Weighted Arcs as Recipes

This is where the model gets interesting. Each drink requires specific quantities of ingredients. The arc weights encode these recipes directly in the net’s structure.

For espresso — 18g beans, 30ml water, 1 cup:

net.AddTransition("make_espresso", "consume", 200, 150, nil)

// Input arcs (consumption)
net.AddArc("coffee_beans", "make_espresso", 18, false)
net.AddArc("water", "make_espresso", 30, false)
net.AddArc("cups", "make_espresso", 1, false)

// Output arcs (tracking)
net.AddArc("make_espresso", "beans_used", 18, false)
net.AddArc("make_espresso", "water_used", 30, false)
net.AddArc("make_espresso", "cups_used", 1, false)

For a latte — 18g beans, 30ml water, 180ml milk, 1 cup:

net.AddTransition("make_latte", "consume", 200, 350, nil)

net.AddArc("coffee_beans", "make_latte", 18, false)
net.AddArc("water", "make_latte", 30, false)
net.AddArc("milk", "make_latte", 180, false)
net.AddArc("cups", "make_latte", 1, false)

net.AddArc("make_latte", "beans_used", 18, false)
net.AddArc("make_latte", "water_used", 30, false)
net.AddArc("make_latte", "milk_used", 180, false)
net.AddArc("make_latte", "cups_used", 1, false)

The arc weight from milk to make_latte is 180. This means make_latte is enabled only when m(milk) ≥ 180. Each firing consumes 180 tokens from milk and produces 180 tokens in milk_used. The recipe is the arc structure — no separate configuration needed.

The Incidence Matrix

The incidence matrix for the full model captures every recipe in a single data structure. Each column is a drink type. Each row is an ingredient (or tracking place). Reading column make_latte:

coffee_beans: -18    water: -30    milk: -180    cups: -1
beans_used: +18    water_used: +30    milk_used: +180    cups_used: +1

Negative entries are consumption. Positive entries in the tracking rows mirror them.

Conservation Laws

The paired structure — every token consumed from an ingredient place appears in the corresponding tracking place — creates P-invariants. For coffee beans:

m(coffee_beans) + m(beans_used) = 1000

For milk:

m(milk) + m(milk_used) = 5000

These hold for all time, under any firing sequence. The conservation law is structural — it follows from the arc weights summing to zero across each ingredient/tracking pair. No beans are created or destroyed. They’re consumed from stock into the “used” counter.

This is the Petri net equivalent of double-entry bookkeeping. Every debit (consumption) has a matching credit (tracking). The invariant is the balance equation.
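The bookkeeping is easy to check with a discrete simulation of a single recipe. Firing make_espresso until it disables leaves the bean invariant intact and matches the roughly 55 drinks that 1,000g of beans supports at 18g each:

```go
package main

import "fmt"

// Double-entry check: every gram leaving coffee_beans lands in beans_used,
// so their sum is invariant under any firing sequence.
func main() {
	beans, beansUsed := 1000, 0
	fired := 0
	for beans >= 18 { // enabling condition for make_espresso
		beans -= 18 // debit: consume from stock
		beansUsed += 18 // credit: record in the tracking place
		fired++
	}
	fmt.Printf("fired %d times; beans=%d used=%d sum=%d\n",
		fired, beans, beansUsed, beans+beansUsed)
	// fired 55 times; beans=10 used=990 sum=1000
}
```

The 10 leftover grams are stranded: below the arc weight of 18, the transition is structurally disabled, which is exactly how the net encodes "out of stock".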

ODE Simulation for Capacity Planning

With the net defined, the continuous relaxation from Chapter 3 turns it into a dynamical system. Each transition gets a rate constant representing drinks per minute:

rates := map[string]float64{
    "make_espresso":   0.5,  // 30/hr
    "make_americano":  0.3,  // 18/hr
    "make_latte":      0.8,  // 48/hr — most popular
    "make_cappuccino": 0.4,  // 24/hr
    "make_mocha":      0.2,  // 12/hr
}

The firing rate for each transition follows mass-action kinetics:

\[ v_j = k_j \prod_i x_i^{W_{ij}} \]

where \(W_{ij}\) is the weight of the input arc from place \(i\) to transition \(j\).

Wait — that looks wrong. With arc weight 18, the rate would depend on the 18th power of the bean count? That’s the literal mass-action formula, but with large token counts (1,000 beans) and large exponents (18), the numbers explode to astronomical values.

Scaling for Stability

This is a real issue in practice. The mass-action rate law was designed for chemical reactions where concentrations are small numbers (moles per liter, typically between 0 and 10). Coffee shop inventory has large integer token counts and large arc weights.

The solution is scaling. Divide the rate constants by a factor that brings the dynamics into a numerically stable range:

scaledRates := make(map[string]float64)
for k, v := range rates {
    scaledRates[k] = v * 0.0001
}

prob := solver.NewProblem(net, initialState, [2]float64{0, duration}, scaledRates)
opts := solver.DefaultOptions()
opts.Dt = 0.001

sol, eqResult := solver.SolveUntilEquilibrium(
    prob, solver.Tsit5(), opts, nil,
)

The scaling doesn’t change the qualitative behavior — ratios between transition rates are preserved, so relative throughput stays the same. It just keeps the numbers in a range where the ODE solver’s adaptive stepping works well.

What the Simulation Shows

Running the simulation from the initial inventory reveals depletion trajectories:

  • Cups deplete first — only 100 available, consumed one per drink regardless of type
  • Coffee beans deplete next — every drink uses 18g, so 1,000g supports about 55 drinks total
  • Milk depletes on a schedule driven by latte popularity — at 180ml per latte and 48 lattes/hour, 5,000ml lasts roughly 35 minutes of pure latte production
  • Water depletes slowest — 10,000ml with most drinks using only 30ml

The conservation laws hold throughout the simulation. At any point in time, each resource plus its tracking place sums to the initial stock — for example:

cups + cups_used = 100

This is verifiable from the simulation output and provides a built-in sanity check.
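A sanity check along these lines is a few lines of Go (the place names cups and cups_used match the scorer used later in this chapter; the sampled states are illustrative):

```go
package main

import (
	"fmt"
	"math"
)

// cupsConserved verifies cups + cups_used = 100 at every sampled state,
// up to a small solver tolerance.
func cupsConserved(states []map[string]float64) bool {
	for _, s := range states {
		if math.Abs(s["cups"]+s["cups_used"]-100) > 1e-6 {
			return false
		}
	}
	return true
}

func main() {
	states := []map[string]float64{
		{"cups": 100, "cups_used": 0},
		{"cups": 61.5, "cups_used": 38.5},
	}
	fmt.Println(cupsConserved(states)) // true
}
```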

Predicting Runout

Rather than running the full ODE, we can estimate depletion times analytically from the rates and arc weights:

consumptionRates := map[string]float64{
    "coffee_beans": 18 * (rates["make_espresso"] +
                          rates["make_americano"] +
                          rates["make_latte"] +
                          rates["make_cappuccino"] +
                          rates["make_mocha"]),
    "milk": 180*rates["make_latte"] +
            120*rates["make_cappuccino"] +
            150*rates["make_mocha"],
    "cups": rates["make_espresso"] +
            rates["make_americano"] +
            rates["make_latte"] +
            rates["make_cappuccino"] +
            rates["make_mocha"],
}

for ingredient, rate := range consumptionRates {
    runoutTime := currentStock[ingredient] / rate
    fmt.Printf("%s depletes in %.0f minutes\n", ingredient, runoutTime)
}

The analytical estimate assumes constant rates — it’s the linear approximation. The ODE simulation is more accurate because rates drop as ingredients deplete (mass-action slows transitions as inputs approach zero). The two converge when resources are abundant and diverge near depletion.
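Plugging in the rate constants from earlier (per-minute rates summing to 2.2 drinks/min), the linear estimate for beans works out as follows:

```go
package main

import "fmt"

func main() {
	// Per-minute rate constants from the inventory model.
	totalDrinks := 0.5 + 0.3 + 0.8 + 0.4 + 0.2 // 2.2 drinks/min
	beansPerMin := 18 * totalDrinks            // 39.6 g/min
	// Linear runout estimate: initial stock / consumption rate.
	fmt.Printf("beans deplete in ~%.0f minutes\n", 1000/beansPerMin)
}
```

This prints a runout of about 25 minutes — the constant-rate approximation; the mass-action ODE stretches that out as the bean marking falls.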

Bottleneck Analysis from Equilibrium

The real power of the continuous model appears at equilibrium. When the ODE reaches steady state (dm/dt = 0), the marking tells you where tokens accumulate — and that’s where your bottlenecks are.

In the coffee shop, equilibrium means all ingredients have depleted and all tracking places have filled. But the path to equilibrium reveals the bottleneck sequence:

  1. First bottleneck: Cups run out (100 cups, all drinks need exactly 1)
  2. Second bottleneck: Beans deplete (all drinks need 18g, no differentiation)
  3. Third bottleneck: Milk depletes (only milk-based drinks affected)

Scenario Analysis

Change the rates to model different days:

Slow day — halve all rates:

rates["make_latte"] = 0.4  // 24/hr instead of 48

Nothing depletes during a normal shift. The equilibrium is far in the future.

Rush hour — double all rates:

rates["make_latte"] = 1.6  // 96/hr

Cups deplete in half the time. Beans follow quickly. The queue builds up before production can handle it.

Latte promotion — triple latte rate, keep others constant:

rates["make_latte"] = 2.4  // 144/hr

Milk becomes the first bottleneck. With 5,000ml at 180ml/latte and 144 lattes/hour, the milk lasts about 12 minutes at this rate. Beans actually last longer because the increased latte volume comes at the expense of other drinks, which are rate-limited by their own (unchanged) constants.

Each scenario is the same net with different rate constants. The structure — places, arcs, weights — stays the same. Only the dynamics change.
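The scenario variations can be generated mechanically. A small helper (of the kind the full-day simulation later in this chapter calls scaleRates) might look like:

```go
package main

import "fmt"

// scaleRates returns a copy of rates with every constant multiplied by
// factor: 0.5 for a slow day, 2.0 for rush hour.
func scaleRates(rates map[string]float64, factor float64) map[string]float64 {
	out := make(map[string]float64, len(rates))
	for k, v := range rates {
		out[k] = v * factor
	}
	return out
}

func main() {
	base := map[string]float64{"make_latte": 0.8, "make_espresso": 0.5}
	rush := scaleRates(base, 2.0)
	fmt.Println(rush["make_latte"]) // 1.6
}
```

Because the structure of the net never changes, a scenario is fully described by one map of rate constants.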

Sensitivity Analysis

go-pflow includes sensitivity analysis that systematically evaluates which rate changes have the biggest impact:

scorer := sensitivity.FinalStateScorer(func(final map[string]float64) float64 {
    return final["cups_used"]  // maximize drinks served
})

analyzer := sensitivity.NewAnalyzer(net, state, rates, scorer).
    WithTimeSpan(0, 60)

result := analyzer.AnalyzeRatesParallel()

The analyzer perturbs each rate individually, simulates the ODE, and ranks transitions by their impact on the scoring function. For the coffee shop, the ranking typically shows:

  1. make_latte — highest positive impact (most popular drink)
  2. make_espresso — second highest (fast to make, less resource-intensive)
  3. make_cappuccino — moderate impact
  4. make_mocha — lowest (complex recipe, slowest rate)

This ranking tells the shop owner where to invest. Adding a second espresso machine (doubling the espresso rate) has more impact on throughput than hiring a second barista for mocha preparation.
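The underlying procedure is one-at-a-time perturbation. A minimal sketch of the idea (not the go-pflow implementation), with simulate standing in for a full ODE run:

```go
package main

import "fmt"

// impact perturbs each rate by +10%, reruns the (stand-in) simulation,
// and reports the change in score: one-at-a-time sensitivity analysis.
func impact(rates map[string]float64,
	simulate func(map[string]float64) float64) map[string]float64 {
	base := simulate(rates)
	out := make(map[string]float64)
	for name, v := range rates {
		perturbed := make(map[string]float64, len(rates))
		for k, r := range rates {
			perturbed[k] = r
		}
		perturbed[name] = v * 1.1
		out[name] = simulate(perturbed) - base
	}
	return out
}

func main() {
	rates := map[string]float64{"make_latte": 0.8, "make_mocha": 0.2}
	// Toy score: drinks served proportional to total rate.
	score := func(r map[string]float64) float64 {
		return r["make_latte"] + r["make_mocha"]
	}
	d := impact(rates, score)
	fmt.Println(d["make_latte"] > d["make_mocha"]) // true
}
```

Even this toy scorer reproduces the qualitative ranking: perturbing the largest rate moves the score the most.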

Restocking and Capacity Limits

A real coffee shop doesn’t run until depletion — it restocks. The Petri net models this with refill transitions that move tokens from supply places into ingredient places:

// Supply places (incoming inventory)
net.AddPlace("beans_supply", 0, nil, 500, 100, nil)
net.AddPlace("milk_supply", 0, nil, 500, 200, nil)

// Refill transitions
net.AddTransition("refill_beans", "refill", 400, 100, nil)
net.AddArc("beans_supply", "refill_beans", 500, false)
net.AddArc("refill_beans", "coffee_beans", 500, false)

net.AddTransition("refill_milk", "refill", 400, 200, nil)
net.AddArc("milk_supply", "refill_milk", 1000, false)
net.AddArc("refill_milk", "milk", 1000, false)

The supply places start at zero — no restocking scheduled. When external events add tokens to beans_supply (a delivery arrives), the refill_beans transition becomes enabled and transfers 500g into the active inventory.

This creates a two-level conservation structure:

  • Inner loop: active inventory plus consumed ingredients stays constant (within one restocking cycle)
  • Outer loop: deliveries increase the totals as supply tokens arrive

The inner conservation law tells you whether beans leak. The outer structure tells you whether deliveries keep up with consumption.

Low-Stock Alerts

The model can trigger alerts based on inventory thresholds:

func CheckLowStock(state map[string]float64) map[string]bool {
    thresholds := map[string]float64{
        "coffee_beans": 100,  // 100g ~ 5 drinks
        "milk":         500,  // 500ml ~ 3 lattes
        "cups":         10,
    }

    alerts := make(map[string]bool)
    for ingredient, threshold := range thresholds {
        if state[ingredient] < threshold {
            alerts[ingredient] = true
        }
    }
    return alerts
}

In a guard-based formulation (Chapter 4), these thresholds would be guards on the make transitions:

(action make_latte :guard {tokens(coffee_beans) >= 118 && tokens(milk) >= 680})

The guard checks not just whether there’s enough for this drink, but whether stock is above the alert threshold plus one recipe’s worth. This is precautionary guarding — blocking production before the resource reaches critical levels.
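The same precautionary check, written as a plain Go predicate (thresholds 100 g + 18 g of beans and 500 ml + 180 ml of milk, matching the guard above):

```go
package main

import "fmt"

// canMakeLatte blocks production before beans or milk cross their alert
// thresholds: alert level plus one latte's worth of each ingredient.
func canMakeLatte(state map[string]float64) bool {
	return state["coffee_beans"] >= 118 && state["milk"] >= 680
}

func main() {
	fmt.Println(canMakeLatte(map[string]float64{"coffee_beans": 500, "milk": 2000})) // true
	fmt.Println(canMakeLatte(map[string]float64{"coffee_beans": 500, "milk": 650}))  // false
}
```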

Full Day Simulation

The most revealing analysis simulates an entire business day with varying demand:

func RunDaySimulation(peakHours []int, baseRate float64) {
    currentState := initialState

    for hour := 6; hour <= 20; hour++ {
        multiplier := 1.0
        for _, peak := range peakHours {
            if hour == peak {
                multiplier = 2.5  // Rush hour
            } else if (hour == peak-1 || hour == peak+1) && multiplier < 2.5 {
                multiplier = 1.5  // Shoulder hours (never override a peak)
            }
        }

        rates := scaleRates(InventoryRates(), baseRate * multiplier)

        prob := solver.NewProblem(net, currentState,
            [2]float64{0, 60}, rates)
        sol := solver.Solve(prob, solver.Tsit5(), solver.FastOptions())
        currentState = sol.GetFinalState()

        stats := computeHourlyStats(currentState)
        fmt.Printf("%02d:00  %4s  drinks: %d  beans: %.0fg  alerts: %v\n",
            hour, rateLabel(multiplier), stats.drinks, stats.beans, stats.alerts)
    }
}

With peak hours at 8am and 12pm, the simulation shows:

  • Normal flow 6-7am as the shop opens
  • A surge at 8am (2.5x rate) that rapidly consumes beans and cups
  • Recovery during mid-morning as rates return to normal
  • A second surge at noon depleting remaining stock
  • Alerts triggering by early afternoon if restocking didn’t happen

The solver handles each hour independently — the final state of one hour becomes the initial state of the next. The FastOptions() preset keeps each solve under a millisecond, making the full-day simulation essentially instant.

From Model to Application

The coffee shop model demonstrates a development pattern that recurs throughout this book:

  1. Define the net — places for resources, transitions for operations, weighted arcs for recipes
  2. Verify structure — check P-invariants (does the model conserve resources?), check for deadlocks (can the system get stuck?)
  3. Simulate dynamics — run the ODE to see how resources flow over time
  4. Analyze bottlenecks — read equilibrium values and sensitivity rankings
  5. Build the application — use the verified model as the backend for operational decisions

The Petri net isn’t a simulation that approximates the real system. It is the system specification. The arc weights are the recipes. The conservation laws are the accounting rules. The equilibrium values are the capacity limits. When you change the model, you change the system.

This is the ResourceNet pattern: tokens count fungible things, arc weights specify recipes, conservation laws guarantee integrity, and ODE simulation answers capacity questions. The same pattern appears wherever you count things — inventory management, budget allocation, resource scheduling, supply chain modeling.

Try it live: Explore the Coffee Shop model at pilot.pflow.xyz to interactively adjust capacity and see how resources flow.

The next chapter applies a different pattern — the GameNet — to tic-tac-toe, where tokens encode board positions and turn order, and the hypothesis evaluator finds optimal moves.

Game Mechanics — Tic-Tac-Toe

Learning objective: Encode board positions, turns, and win conditions as token flow; use ODE simulation for move evaluation without game-specific heuristics.

The coffee shop modeled resources — fungible tokens flowing through recipes. This chapter models a game — structured tokens encoding board state, turn order, move history, and win detection. The Petri net is the same formalism, but the pattern is fundamentally different: a GameNet instead of a ResourceNet.

The central insight: strategic value emerges from model topology alone. We don’t code “prefer the center” or “block two-in-a-row.” The Petri net’s structure — which positions participate in more winning patterns — creates these preferences automatically through the ODE dynamics. The model discovers tic-tac-toe strategy the way mass-action kinetics discovers chemical equilibrium: by following the rates.

The Board as Places

A tic-tac-toe board has nine cells. Each cell is a place, initialized with one token meaning “available”:

P00: 1    P01: 1    P02: 1
P10: 1    P11: 1    P12: 1
P20: 1    P21: 1    P22: 1

The naming convention is P{row}{col}, zero-indexed. P00 is the top-left corner, P11 is the center, P22 is the bottom-right corner.

When a player claims a cell, the token is consumed from the position place. An empty place (P_ij = 0) means the cell is occupied. This is the enabling condition in action: a move transition for cell (i,j) requires P_ij ≥ 1. Once the token is consumed, no other transition can claim that cell. The bipartite structure of the Petri net enforces the rule “you can’t play in an occupied cell” without any conditional logic.

Turn Control via Mutual Exclusion

Tic-tac-toe alternates between X and O. A single place Next controls whose turn it is:

  • Next = 0 means X’s turn
  • Next = 1 means O’s turn

Each X move transition produces a token into Next:

P_ij  -->  PlayX_ij  -->  X_ij
                      -->  Next

Each O move transition consumes a token from Next:

Next  -->  PlayO_ij  -->  O_ij
P_ij  -->

X goes first (Next starts at 0, so O transitions are blocked). After X plays, Next becomes 1, enabling O. After O plays, Next returns to 0. The alternation is structural — no turn counter, no modular arithmetic, just a token bouncing between “available” and “consumed.”

This is the same mutual exclusion pattern from the traffic intersection in Chapter 1. The Next token is a shared resource that enforces sequential access, exactly like the shared token that prevents both lights from being green simultaneously.
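The alternation can be checked with a few lines of discrete token bookkeeping (a standalone sketch, independent of the go-pflow API):

```go
package main

import "fmt"

type marking map[string]int

// fire consumes one token from each input place and produces one in each
// output place, returning false if any input lacks a token (not enabled).
func fire(m marking, inputs, outputs []string) bool {
	for _, p := range inputs {
		if m[p] < 1 {
			return false
		}
	}
	for _, p := range inputs {
		m[p]--
	}
	for _, p := range outputs {
		m[p]++
	}
	return true
}

func main() {
	m := marking{"P00": 1, "P11": 1, "Next": 0}
	fmt.Println(fire(m, []string{"Next", "P00"}, []string{"O00"})) // false: O blocked, X goes first
	fmt.Println(fire(m, []string{"P11"}, []string{"X11", "Next"})) // true: X plays center
	fmt.Println(fire(m, []string{"Next", "P00"}, []string{"O00"})) // true: now O can move
}
```

No turn variable appears anywhere: the Next token is the turn.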

History Places for Pattern Detection

The basic model tracks where tokens are (board state) but not where they came from (move history). Adding history places upgrades the model from “what’s on the board” to “who played where.”

For each cell, two history places record which player claimed it:

X00, X01, X02, ..., X22    (9 places — X's moves)
O00, O01, O02, ..., O22    (9 places — O's moves)

When X plays at position (1,1):

  • Consume token from P11 (cell becomes occupied)
  • Produce token in X11 (record X played here)
  • Produce token in Next (pass turn to O)

When O plays at position (0,2):

  • Consume token from Next (it’s O’s turn)
  • Consume token from P02 (cell becomes occupied)
  • Produce token in O02 (record O played here)

The history layer doesn’t change the game mechanics — it adds information without altering what moves are legal. But it enables win detection, which needs to know not just that a cell is occupied, but which player occupies it.

The Full Place Set

The complete model has 30 places:

Category          Places       Count
Board positions   P00–P22      9
X history         X00–X22      9
O history         O00–O22      9
Turn control      Next         1
Win detection     WinX, WinO   2

And a conservation constraint ties them together. For every cell (i,j):

P_ij + X_ij + O_ij = 1

Every cell is in exactly one state: available (P_ij = 1), claimed by X (X_ij = 1), or claimed by O (O_ij = 1). Summed over the board, the total is always 9 — the board size. This is a P-invariant verified by the incidence matrix.

In the DSL

Using the struct tag syntax from Chapter 4:

type TicTacToe struct {
    _ struct{} `meta:"name:tic-tac-toe,version:v1.0.0"`

    // Board positions (1 = available)
    P00 dsl.TokenState `meta:"initial:1"`
    P01 dsl.TokenState `meta:"initial:1"`
    // ... P02 through P22

    // X move history (0 = not played)
    X00 dsl.TokenState `meta:"initial:0"`
    X01 dsl.TokenState `meta:"initial:0"`
    // ... X02 through X22

    // O move history
    O00 dsl.TokenState `meta:"initial:0"`
    // ... O01 through O22

    // Turn control and win detection
    Next dsl.TokenState `meta:"initial:0"`
    WinX dsl.TokenState `meta:"initial:0"`
    WinO dsl.TokenState `meta:"initial:0"`

    // Move actions (18 total: 9 for X, 9 for O)
    PlayX00 dsl.Action `meta:""`
    PlayO00 dsl.Action `meta:""`
    // ...

    // Win detection (16 total: 8 for X, 8 for O)
    XRow0 dsl.Action `meta:""`  // X00, X01, X02
    XDg0  dsl.Action `meta:""`  // X00, X11, X22
    // ...
}

The flows define the arc structure:

func (TicTacToe) Flows() []dsl.Flow {
    return []dsl.Flow{
        // X moves: Position -> PlayX -> History + Next
        {From: "P00", To: "PlayX00"},
        {From: "PlayX00", To: "X00"},
        {From: "PlayX00", To: "Next"},

        // O moves: Next + Position -> PlayO -> History
        {From: "Next", To: "PlayO00"},
        {From: "P00", To: "PlayO00"},
        {From: "PlayO00", To: "O00"},

        // Win detection: 3-in-a-row -> WinX
        {From: "X00", To: "XRow0"},
        {From: "X01", To: "XRow0"},
        {From: "X02", To: "XRow0"},
        {From: "XRow0", To: "WinX"},
        // ... all 8 patterns for each player
    }
}

The total model: 30 places, 34 transitions, 118 arcs. Every rule of tic-tac-toe — valid moves, turn alternation, win conditions — encoded purely in arc structure.

Win detection is where the model becomes compositionally interesting. There are 8 winning patterns in tic-tac-toe: 3 rows, 3 columns, 2 diagonals. Each pattern is a transition that consumes three history tokens and produces a win token.

For the top row (X wins):

X00 --> XRow0 --> WinX
X01 -->
X02 -->

The transition XRow0 is enabled when X00 ≥ 1 AND X01 ≥ 1 AND X02 ≥ 1 — all three top-row cells claimed by X. When it fires, it produces a token in WinX.

This is a pattern collector: a transition that recognizes a specific configuration of history tokens. Each winning pattern gets its own collector. No conditional logic, no “check all rows and columns” procedure — just 8 independent transitions, each watching for its specific three-token combination.

The 16 pattern collectors (8 for X, 8 for O) are the complete game-over detection system. When any one fires, the game has been won. Multiple can potentially fire (if a player completes two lines simultaneously), but in practice the first one to fire determines the winner.
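Each collector is just a three-input enabling check. Sketched in Go over the eight X patterns:

```go
package main

import "fmt"

// The eight winning patterns, as triples of X history places.
var xPatterns = [8][3]string{
	{"X00", "X01", "X02"}, {"X10", "X11", "X12"}, {"X20", "X21", "X22"}, // rows
	{"X00", "X10", "X20"}, {"X01", "X11", "X21"}, {"X02", "X12", "X22"}, // columns
	{"X00", "X11", "X22"}, {"X02", "X11", "X20"}, // diagonals
}

// xWins reports whether any pattern collector for X is enabled.
func xWins(m map[string]int) bool {
	for _, p := range xPatterns {
		if m[p[0]] >= 1 && m[p[1]] >= 1 && m[p[2]] >= 1 {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(xWins(map[string]int{"X00": 1, "X01": 1, "X02": 1})) // true: top row
	fmt.Println(xWins(map[string]int{"X00": 1, "X01": 1}))           // false
}
```

The O collectors are the mirror image over the O history places.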

Why Composition Matters

Each pattern collector is independent. Adding a new win condition (say, for a variant game) means adding a new transition with three input arcs and one output arc. Removing a condition means removing a transition. The rest of the model is untouched.

This is the compositional advantage of Petri nets over procedural game logic. In code, win detection is typically a function that iterates over patterns:

winPatterns := [][]string{
    {"00", "01", "02"}, {"10", "11", "12"}, ...
}
for _, pattern := range winPatterns {
    if allMarked(pattern) { return true }
}

The procedural version works, but it’s monolithic — changing the win conditions means changing the function. The Petri net version is modular — each pattern is an independent structural element that composes with the rest.

Position Value from Topology

Here’s the key insight that connects game structure to strategy. Each board position participates in a different number of winning patterns:

Position                              Patterns   Type
Center (1,1)                          4          Row + Column + 2 Diagonals
Corners (0,0), (0,2), (2,0), (2,2)    3          Row + Column + 1 Diagonal
Edges (0,1), (1,0), (1,2), (2,1)      2          Row + Column

[Figure: Position value heatmap — incidence degree to win transitions]

The center participates in 4 winning patterns. Corners participate in 3. Edges participate in 2. This structural fact — encoded in the arc connectivity — is the source of strategic value. More connections to pattern collectors means more paths to victory, which the ODE dynamics translate into higher scores.

No game theory is needed to derive this. It falls directly from the net’s topology.

ODE-Guided Strategy

With the model defined, we can use the continuous relaxation to evaluate moves. The algorithm is simple:

  1. For each available move, create a hypothetical state after making that move
  2. Run the ODE simulation forward from that state
  3. Read the final values of WinX and WinO
  4. Score = my_win - opponent_win

The move with the highest score is the best move.

The Scoring Function

targetPlace := "win_x"
oppPlace := "win_o"
if currentTurn == PlayerO {
    targetPlace = "win_o"
    oppPlace = "win_x"
}

bestScore := math.Inf(-1)
var bestMove Move  // Move is the game's move type
for _, move := range availableMoves {
    // Create hypothetical state (position and historyPlace come from move)
    hypState := copyState(currentState)
    hypState[position] -= 1
    hypState[historyPlace] += 1

    // Run ODE
    prob := solver.NewProblem(net, hypState,
        [2]float64{0, 3.0}, rates)
    sol := solver.Solve(prob, solver.Tsit5(), opts)
    final := sol.GetFinalState()

    // Score the hypothesis and keep the best move
    score := final[targetPlace] - final[oppPlace]
    if score > bestScore {
        bestScore, bestMove = score, move
    }
}

The ODE simulation with mass-action kinetics explores all possible continuations simultaneously. When X has tokens in positions that participate in many patterns, the flow toward WinX is higher. When O can block a pattern, that flow is diverted. The final token counts in WinX and WinO represent the aggregate outcome across all possible game continuations — weighted by their structural likelihood.

What the ODE Discovers

Empty board evaluation. X evaluates all 9 positions. The ODE reveals:

  • Center (1,1): score 1.27 — highest, because 4 pattern connections create the most flow toward WinX
  • Corners: score 0.95 — second, 3 patterns each
  • Edges: score 0.63 — lowest, 2 patterns each

The ODE has “discovered” the opening strategy: play the center. No minimax tree, no alpha-beta pruning, no opening book — just mass-action kinetics flowing through the net’s topology.

Responding to center. After X takes center, O evaluates defensive options. All scores are negative (X has the advantage), but corners (-1.04) minimize X’s lead better than edges (-1.38). The ODE discovers the defensive principle: play corners against center.

Finding winning threats. When X has center and a corner, the ODE identifies the opposite corner as the best move (score 1.39) because it creates a two-way threat — a fork that completes one diagonal while opening another. The pattern collectors naturally amplify fork positions because they have higher connectivity.

Blocking. When X has two in a row, O’s best move is the blocking position (score 0.80 vs. 0.74-0.77 for other moves). The ODE discovers blocking because the flow toward WinX is dramatically higher along the threatened pattern. Placing a token to block cuts off that flow.

Performance

Each move evaluation requires one ODE solve. With 9 possible moves on an empty board, that’s 9 solves. Using tuned solver options:

opts := &solver.Options{
    Dt:     0.2,
    Reltol: 1e-3,
}

Each solve takes roughly 4 milliseconds, so a full game (about 45 total evaluations across all moves) completes in well under a second. Fast enough for interactive play.

ODE vs. Random: Results

Running the ODE AI against a random player over many games:

Matchup         X Wins   O Wins   Draws
ODE vs Random   ~95%     ~2%      ~3%
Random vs ODE   ~15%     ~70%     ~15%
ODE vs ODE      ~10%     ~5%      ~85%

The ODE AI dominates random play. When both players use ODE evaluation, the game usually draws — the correct outcome for optimal play. The model, without any game-specific heuristics, approximates optimal strategy.

Game Halting and Draw Detection

A subtle but critical detail: what happens when a player wins? In the basic model, tokens continue flowing after a win — the ODE doesn’t know the game should stop. This distorts strategic values because the simulation averages over impossible continuations.

The fix is game halting: win transitions consume the turn token without returning it.

X00 --> XRow0 --> WinX
X01 -->
X02 -->
Next -->             (consume turn -- game stops)

When X wins, the XRow0 transition consumes Next (it was O’s turn). No turn token is returned. All further move transitions are blocked because they require either an empty position token or a turn token, and the turn token is gone. The win state becomes an absorbing state — once reached, no further flow is possible.

Draw Detection

Draw detection adds a move counter. Each play transition deposits a token into move_tokens. A draw transition fires when:

  • 9 move tokens have accumulated (all squares filled)
  • game_active is still marked (no winner)

The draw transition awards a point to WinO, encoding the principle that “a draw is a win for the defender.” This changes the ODE dynamics:

Without draw detection: O’s scores are always negative. The model only sees win paths, so it favors positions with more winning lines even when blocking is essential.

With draw detection: O’s scores become positive because draws count as partial victories. Blocking a threat now has measurable value — it preserves the possibility of a draw, which has positive worth.

The Integer Reduction

The ODE scores from the empty board — center 1.27, corners 0.95, edges 0.63 — are suspiciously proportional. Divide each by the smallest:

1.27 / 0.63 ≈ 2.02,  0.95 / 0.63 ≈ 1.51,  0.63 / 0.63 = 1.00 — the ratio 4 : 3 : 2, halved.

The ODE is tracking three integers. Not computing them exactly — the ratios are approximate, within a few percent — but recovering the same ranking and grouping that the integers predict.

Incidence Degree to Terminals

The win transitions in this model are terminal — they are sinks in the net. The number of arcs from a position’s history place to downstream win transitions is its incidence degree. This is a graph property — count the arcs, get an integer. No simulation needed.

Position                              Win patterns                 Incidence degree
Center (1,1)                          Row + Column + 2 Diagonals   4
Corners (0,0), (0,2), (2,0), (2,2)    Row + Column + 1 Diagonal    3
Edges (0,1), (1,0), (1,2), (2,1)      Row + Column                 2

The full board heatmap is [3, 2, 3, 2, 4, 2, 3, 2, 3]. The strategic value of a position in a game with terminal win states is determined by its incidence degree to those terminals. The ODE recovers this ranking through dynamics; the graph gives it directly.

Generalizing to Any Board Size

The incidence degree formula works for any board. Each position at (i, j) on an n×n board gets:

  • 2 (its row + its column), always
  • +1 if on the main diagonal (i = j)
  • +1 if on the anti-diagonal (i + j = n − 1)
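The formula translates directly to code; printing the degrees for a 3×3 board reproduces the heatmap [3 2 3 2 4 2 3 2 3]:

```go
package main

import "fmt"

// incidenceDegree counts the win patterns through position (i, j)
// on an n×n board: its row, its column, and any diagonals it lies on.
func incidenceDegree(i, j, n int) int {
	d := 2 // row + column, always
	if i == j {
		d++ // main diagonal
	}
	if i+j == n-1 {
		d++ // anti-diagonal
	}
	return d
}

func main() {
	heatmap := []int{}
	for i := 0; i < 3; i++ {
		for j := 0; j < 3; j++ {
			heatmap = append(heatmap, incidenceDegree(i, j, 3))
		}
	}
	fmt.Println(heatmap) // [3 2 3 2 4 2 3 2 3]
}
```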

So the only possible degrees are {4, 3, 2}, and degree 4 only occurs at the center of odd-sized boards. Running the ODE with uniform rates across board sizes from 3×3 to 7×7 confirms that the same predictive behavior holds at every scale:

  • Positions with the same incidence degree always receive the same ODE score
  • Higher incidence degree always produces a higher score
  • The ranking center > diagonal > non-diagonal is preserved

The ODE ratios approximate the integer ratios closely for small boards (~2% error at 3×3) and less closely for larger ones (~10% at 7×7), because more positions compete for flow through shared win transitions. But the ranking — which is what matters for move selection — never changes. The incidence degree is an exact predictor of ODE ranking for all tested board sizes.

Dynamic Evaluation: Injecting Current State

The empty board is the easy case. The real utility comes from injecting the current game state as the marking and recomputing.

When a position is occupied, its token has been consumed — it drops out of the calculation. The incidence degree is now computed only against still-reachable win transitions. A corner that participates in 3 win patterns on an empty board might participate in only 1 if the opponent has blocked the other two. The heatmap updates to reflect the live topology.

This is the general “next step” evaluator. Given any board state:

  1. Identify which win transitions are still reachable (not blocked by opponent pieces)
  2. For each empty position, count the number of reachable win transitions it connects to
  3. The highest count is the best move

No search tree. No minimax. No alpha-beta pruning. Just count the edges to live terminals.
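The three steps collapse to a dozen lines. A sketch that scores empty cells by reachable-line count (board cells encoded as 'X', 'O', or ' '; a line is reachable if the opponent has no piece on it):

```go
package main

import "fmt"

// The eight win lines of the 3x3 board, as cell indices 0..8.
var lines = [8][3]int{
	{0, 1, 2}, {3, 4, 5}, {6, 7, 8}, // rows
	{0, 3, 6}, {1, 4, 7}, {2, 5, 8}, // columns
	{0, 4, 8}, {2, 4, 6}, // diagonals
}

// bestMove returns the empty cell connected to the most win lines the
// opponent has not blocked: the incidence-degree evaluator.
func bestMove(board [9]byte, opp byte) int {
	best, bestCount := -1, -1
	for cell := 0; cell < 9; cell++ {
		if board[cell] != ' ' {
			continue
		}
		count := 0
		for _, ln := range lines {
			through, blocked := false, false
			for _, c := range ln {
				if c == cell {
					through = true
				}
				if board[c] == opp {
					blocked = true
				}
			}
			if through && !blocked {
				count++
			}
		}
		if count > bestCount {
			best, bestCount = cell, count
		}
	}
	return best
}

func main() {
	empty := [9]byte{' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '}
	fmt.Println(bestMove(empty, 'O')) // 4: the center
}
```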

The Boundary Between Counting and Simulation

Tic-tac-toe reduces to {4, 3, 2} precisely because it is simple enough that topology is all you need. The win transitions are independent sinks. No position participates in resource competition with another. The incidence structure has no interesting dynamics — just a direct count.

For more complex games — poker with its multi-phase betting structure, or governance models with competing resource flows — the incidence structure is rich enough that positions do not reduce to small integers. Multiple paths interact, resource constraints create bottlenecks, and the dynamics through the net have real work to do. The ODE earns its keep in those cases.

But the mechanism is identical:

  1. Model the system as a Petri net with terminal (win/loss/goal) transitions
  2. Inject the current state as the marking
  3. Compute the evaluation from token flow to reachable terminals

For simple nets, step 3 is integer counting. For complex nets, step 3 is ODE simulation. The boundary between them is the boundary between systems where topology alone determines strategy and systems where dynamics matter.

What we have is a static evaluation function defined purely by net topology and current state. No heuristics, no training data, no game-specific knowledge beyond the Petri net model itself. For any game or system modeled as a Petri net with designated terminal states, the function is: for each position, how connected is it to winning? The Petri net is the domain knowledge — the topology encodes the rules, and the incidence structure encodes the strategy.

The GameNet Pattern

Tic-tac-toe demonstrates the GameNet pattern from Chapter 4:

  1. Board state as places — each cell is a place, tokens mark availability
  2. Move history as places — separate tracking of who played where
  3. Turn control as a shared token — mutual exclusion enforces alternation
  4. Pattern collectors as transitions — compositional win detection
  5. ODE scoring — strategic value emerges from topology
  6. Integer reduction — for simple games, the ODE collapses to incidence degree counting

The same pattern scales to more complex games. A Connect Four model would have 42 position places (7 columns × 6 rows) and more pattern collectors (horizontal, vertical, diagonal sequences of 4). A Go model would have 361 position places. The complexity is in the number of places and patterns, not in the logic — the logic is always the same: tokens flow, patterns collect, scores emerge. And when the game is simple enough, the scores are integers you can read off the graph without running the solver at all.

The mathematical foundation is identical to the coffee shop. The incidence matrix encodes all arcs. Conservation laws guarantee no tokens are created or destroyed. The ODE simulation finds the natural flow. The difference is interpretation: in the coffee shop, tokens are grams of beans; in tic-tac-toe, tokens are board positions and move records. The mathematics doesn’t care.

Try it live: Play against the ODE solver in the Tic-Tac-Toe demo or try the ZK variant for privacy-preserving moves at pilot.pflow.xyz.

The next chapter applies Petri nets to constraint satisfaction — modeling Sudoku as a system where arc weights and conservation laws enforce the puzzle’s rules, and the ODE relaxation finds valid configurations. Chapter 13 returns to the integer reduction from a different angle: deriving rate constants automatically from graph connectivity, and feeding them directly into the ZK verification pipeline.

Constraint Satisfaction — Sudoku

Learning objective: Express constraints as token conservation and solve via continuous relaxation.

Chapters 5 and 6 applied Petri nets to resource management and game mechanics. This chapter applies them to a different problem class: constraint satisfaction. Sudoku is a puzzle where you fill a grid such that every row, column, and block contains each digit exactly once. There’s no optimization, no adversary, no time — just constraints.

The Petri net encoding follows the same three-layer pattern from tic-tac-toe: cells as places, digit placements as transitions, and constraint collectors that fire when a row, column, or block is complete. The ODE simulation ranks candidate moves by how much flow they direct toward the solved place — no backtracking search required.

The Puzzle

We’ll use a 4×4 mini-Sudoku to keep things comprehensible:

  Initial:            Solution:
+---+---+---+---+    +---+---+---+---+
| 1 | . | . | . |    | 1 | 2 | 4 | 3 |
+---+---+---+---+    +---+---+---+---+
| . | . | 2 | . |    | 3 | 4 | 2 | 1 |
+===+===+===+===+ -> +===+===+===+===+
| . | 3 | . | . |    | 2 | 3 | 1 | 4 |
+---+---+---+---+    +---+---+---+---+
| . | . | . | 4 |    | 4 | 1 | 3 | 2 |
+---+---+---+---+    +---+---+---+---+

Constraints: each row, column, and 2×2 block must contain digits 1–4 exactly once. The 9×9 version uses digits 1–9 with 3×3 blocks — same structure, just more of it.

Cell Places and Digit Transitions

Layer 1: Cell Places

Each cell in the grid is a place. A token means “this cell is empty”:

P00: 0  P01: 1  P02: 1  P03: 1
P10: 1  P11: 1  P12: 0  P13: 1
P20: 1  P21: 0  P22: 1  P23: 1
P30: 1  P31: 1  P32: 1  P33: 0

Given cells (1 at position 00, 2 at position 12, 3 at position 21, 4 at position 33) have no token — they’re already filled.

Layer 2: Digit Transitions and History

For each cell × digit combination, a transition represents “write this digit in this cell”:

Transition D2_01:             History place _D2_01:
"Write digit 2 at (0,1)"     "Digit 2 is at (0,1)"

      +------+
P01 --| D2   |---> _D2_01
(1)   | _01  |     (0 -> 1)
      +------+

Firing consumes the cell token (cell is now filled) and produces a history token (recording which digit was placed). This is identical to the tic-tac-toe pattern: P_ij → PlayX_ij → X_ij.

For a 4×4 grid: 16 cells × 4 digits = 64 digit transitions and 64 history places. Most of these transitions will be irrelevant for any given puzzle (you can’t place a 1 in a row that already has a 1), but the ODE simulation handles this naturally — conflicting placements produce zero flow.

Layer 3: Constraint Collectors

When all required digits appear in a row, column, or block, a constraint collector transition fires and deposits a token in the solved place:

_D1_00 --> Row0_Complete --> solved
_D2_01 -->
_D3_02 -->
_D4_03 -->

The Row0_Complete transition requires history tokens for all four digits in row 0. When all four are present — meaning row 0 has digits 1, 2, 3, and 4 — the collector fires.

For the 4×4 grid: 4 row collectors + 4 column collectors + 4 block collectors = 12 constraint collectors. Maximum tokens in solved = 12. A fully solved puzzle has all 12 constraints satisfied.

This is a direct generalization of the tic-tac-toe pattern collectors. In tic-tac-toe, collectors watched for three-in-a-row. In Sudoku, collectors watch for complete rows, columns, and blocks. Same mechanism, different patterns.

How the Model Works

Step-by-step for cell (0,1) — currently empty:

  1. Initial: P01 = 1 (cell is empty, available for placement)

  2. Consider digit 2: Transition D2_01 is enabled because P01 has a token

  3. Fire D2_01:

    • P01: 1 → 0 (cell now filled)
    • _D2_01: 0 → 1 (history records “digit 2 at position (0,1)”)
  4. Constraint progress: When row 0 eventually has all four digits placed, Row0_Complete fires and solved gets +1 token

The model doesn’t encode which digit goes where — that’s the puzzle’s job. It encodes what constitutes a valid solution: all constraint collectors firing. The ODE simulation explores the space of possible placements and measures how much flow reaches solved.

ODE Prediction for Move Ordering

The same hypothesis evaluation from tic-tac-toe applies:

For each candidate move:
  1. Create hypothetical state (after placing this digit)
  2. Run ODE simulation (t=0 to t=3.0)
  3. Measure token flow to 'solved' place
  4. Higher flow = move more likely leads to solution

Why This Works

The constraint collectors create asymmetric flow in the ODE. A digit placement that satisfies constraints in multiple directions (its row, column, and block) creates more flow toward solved than one that only helps one constraint. The mass-action kinetics amplify this: more enabled collectors means higher aggregate flow rate.

A placement that creates a conflict — placing a 2 in a row that already has a 2 — produces zero flow through the corresponding row collector. The ODE naturally penalizes conflicts without any explicit “conflict detection” logic.

Move Ranking Example

For cell (0,1) in the 4×4 puzzle, evaluating each digit:

Digit 2: solved flow = 0.31  <-- Correct answer
Digit 3: solved flow = 0.28
Digit 4: solved flow = 0.15
Digit 1: blocked (conflict -- row 0 already has 1)

The ODE ranks digit 2 highest because it enables the most downstream constraint satisfaction. Digit 1 is blocked because its row collector can never fire (the history token for “1 in row 0” already exists from the given cell).

Tuning for Speed

A naive ODE simulation of the full model is slow. But we don’t need precision — we need ranking. Two moves scored at 0.31 and 0.28 will maintain their relative order whether we simulate to t=3.0 or t=10.0.

This enables aggressive tuning:

| Parameter | Default | Tuned | Why |
|---|---|---|---|
| Time horizon | t=10.0 | t=3.0 | Only need relative rankings |
| Abstol | 1e-6 | 1e-4 | Looser tolerance, faster solve |
| Step size (dt) | 0.01 | 0.2 | Larger steps, fewer iterations |
| Max iterations | 100000 | 1000 | Early termination OK |

The performance impact:

| Model | Naive | Tuned | Speedup |
|---|---|---|---|
| 4×4 (64 transitions) | 0.8s | 0.04s | 20× |
| 9×9 (729 transitions) | ~60s | ~3s | 20× |

This makes interactive analysis feasible — evaluating all candidates for a cell takes seconds, not minutes.

Scaling from 4×4 to 9×9

The same pattern scales without structural changes:

| Component | 4×4 | 9×9 |
|---|---|---|
| Cell places | 16 | 81 |
| Digits | 4 | 9 |
| History places | 64 | 729 |
| Digit transitions | 64 | 729 |
| Row collectors | 4 | 9 |
| Column collectors | 4 | 9 |
| Block collectors | 4 | 9 |
| Max solved tokens | 12 | 27 |

The 9×9 model has 729 transitions — one for each cell × digit combination. The constraint collectors expand from 12 to 27 (9 rows + 9 columns + 9 blocks). The structure is identical; only the dimensions change.

The ODE simulation time grows with the number of transitions (729 vs. 64), but the tuned solver still handles it in seconds. The key insight: we’re not solving Sudoku with the ODE. We’re ranking candidate moves. The ranking guides a discrete solver (backtracking, constraint propagation) toward the solution faster.

Constraint Satisfaction as Token Flow

The Sudoku model reveals a general pattern for constraint satisfaction problems:

  1. Variables become places (cells that can be filled)
  2. Domain values become transitions (digits that can be placed)
  3. Constraints become collector transitions (patterns that must be completed)
  4. Solution is a state where all collectors have fired

This pattern works for any CSP. An N-queens problem would have N² cell places, N queen transitions per column, and collectors for row uniqueness, column uniqueness, and diagonal uniqueness. A graph coloring problem would have vertex places, color transitions, and collectors for each edge (ensuring adjacent vertices have different colors).

The ODE simulation doesn’t solve the CSP directly — it provides a heuristic for variable and value ordering. In constraint programming, the order in which you assign variables and try values dramatically affects performance. The ODE’s flow-based ranking is a natural heuristic: try the assignment that creates the most flow toward the solution.

No hardcoded solver. Constraints emerge from topology. The same four-term primitive — cell, func, arrow, guard — encodes Sudoku, N-queens, graph coloring, and any other CSP. The structure changes; the mechanism doesn’t.

Optimization — The Knapsack Problem

Learning objective: Solve NP-hard problems approximately via mass-action kinetics.

The previous chapter used ODE simulation as a heuristic for constraint satisfaction — ranking moves, not solving directly. This chapter goes further: applying continuous relaxation to a combinatorial optimization problem where the ODE provides direct insight into the solution structure.

The 0/1 knapsack problem asks: given items with weights and values, select a subset that maximizes total value within a weight budget. It’s NP-hard. Traditional approaches use branch-and-bound — systematic enumeration with pruning. The Petri net approach uses mass-action kinetics — items compete for capacity through continuous dynamics. The competition reveals which items matter and which actively hurt the solution.

The 0/1 Knapsack as a Petri Net

Problem Definition

Four items with different weights and values, capacity budget of 15:

| Item | Weight | Value | Efficiency (v/w) |
|---|---|---|---|
| item0 | 2 | 10 | 5.0 |
| item1 | 4 | 12 | 3.0 |
| item2 | 6 | 12 | 2.0 |
| item3 | 9 | 16 | 1.78 |

The optimal solution takes items 0, 1, and 3 — total weight 15, total value 38. Item2 is excluded despite reasonable efficiency because its weight (6) prevents the higher-value item3 (weight 9) from fitting alongside items 0 and 1.

Net Structure

The Petri net encodes the knapsack as a dynamic system:

  • Item places: Each holds 1 token (item available) or 0 (item taken). This enforces the 0/1 constraint — you either take an item or you don’t.
  • Capacity place: Holds tokens representing available weight budget. Initial marking = 15 (the weight limit).
  • Take transitions: One per item. Consumes the item token plus capacity tokens equal to the item’s weight. Produces tokens into value and weight tracking places.

For item0 (weight 2, value 10):

item0 ----------> take_item0 ----> value_taken
                    ^                (arc weight 10)
capacity -----------+
  (arc weight 2)

The arc from capacity to take_item0 has weight 2 — taking item0 costs 2 units of capacity. The enabling condition requires item0 ≥ 1 AND capacity ≥ 2. If the remaining capacity is less than 2, this item can’t be taken.

The critical structural property: all items compete for the same capacity pool. Taking item2 (weight 6) leaves less capacity for item3 (weight 9). This competition is encoded in the shared input place, not in any explicit exclusion logic.

Item Efficiency Encoded in Transition Rates

In the basic model, all transitions have rate 1.0. The mass-action dynamics determine how items are consumed:

With uniform rate constants (k = 1.0 for all items), the transition rate depends on the arc weight — items with larger weights have their rates modulated by higher powers of the remaining capacity. As capacity depletes, heavier items’ rates drop faster.

This creates a natural priority: lighter items are consumed first because they have lower-order dependence on capacity. Heavy items are left for later, and if capacity runs out, they’re stranded.

Continuous Relaxation and Rounding

Running the ODE simulation with uniform rates:

Final state (continuous approximation):
  Value accumulated:    35.71
  Weight used:          15.00
  Capacity remaining:    0.00

Item consumption (fraction taken):
  item0: 71.4% taken
  item1: 71.4% taken
  item2: 71.4% taken
  item3: 71.4% taken

The continuous relaxation takes fractional amounts of each item — all at approximately 71.4%. This is the nature of ODE simulation: it finds a smooth approximation rather than discrete 0/1 choices. The total value is 35.71, compared to the discrete optimum of 38.

The fractional solution resembles the LP (linear programming) relaxation of the knapsack. The true LP optimum is an upper bound on the discrete optimum: you can’t do better by restricting to integers. The ODE equilibrium, however, is only a feasible fractional point, not that optimum. At 35.71 it sits below the discrete optimum of 38, because uniform competition spreads capacity across every item, including the suboptimal one. Either way, the gap tells us that simple rounding won’t find the optimum. We need more information about the solution structure.

Exclusion Analysis for Sensitivity

This is where the Petri net approach delivers insight that branch-and-bound doesn’t. Exclusion analysis disables each item’s transition one at a time and re-runs the ODE:

| Excluded | Final Value | Relative to Baseline |
|---|---|---|
| none | 35.71 | 100.0% |
| item0 | 31.58 | 88.4% |
| item1 | 35.29 | 98.8% |
| item2 | 37.75 | 105.7% |
| item3 | 32.00 | 89.6% |

Three of the four items behave as expected — excluding them reduces the total value. But item2 is anomalous: excluding it increases the value from 35.71 to 37.75.

This means item2 is actively hurting the solution. It’s not just “not included in the optimum” — it’s competing for capacity that other items would use more effectively. By consuming 6 units of capacity at an efficiency of only 2.0, item2 crowds out items 0 and 3 which have better value-for-weight ratios.

The exclusion analysis tells us exactly which item to remove from the model. No combinatorial search needed — one sensitivity pass reveals the suboptimal item.

Convergence After Exclusion

With item2’s transition disabled, the ODE converges toward the discrete optimum:

| Time | Value | Gap to Optimal |
|---|---|---|
| t=10 | 37.75 | 0.25 |
| t=100 | 37.97 | 0.03 |
| t=1000 | 38.00 | 0.00 |

Given enough simulation time, the continuous relaxation without the suboptimal item converges to the exact discrete optimum of 38. The remaining items (0, 1, 3) are fully compatible — their combined weight equals the capacity exactly — so the ODE reaches the integer solution.

This is the power of combining structural analysis (exclusion) with continuous relaxation (ODE). The exclusion step eliminates competition from suboptimal items. The ODE step then finds the remaining optimum naturally.
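The numbers above can be reproduced with a few lines of explicit Euler integration. This sketch assumes the simplest possible rate law — each take-transition fires at rate item_i × capacity, with capacity consumed in proportion to the item’s arc weight. That is a modeling assumption (the book’s engine may use a different kinetics), but under it the simulation reproduces the reported 35.71 baseline and the 37.75 value at t=10 with item2 excluded:

```go
package main

import "fmt"

// simulateKnapsack integrates the continuous knapsack dynamics with
// explicit Euler. Each item place starts at 1.0 (fully available);
// take_i fires at rate item_i*capacity and consumes weight_i units of
// capacity per unit fired. exclude disables one item's transition
// (-1 means none). Returns the accumulated value.
func simulateKnapsack(weights, values []float64, capacity, tEnd float64, exclude int) float64 {
	items := make([]float64, len(weights))
	for i := range items {
		items[i] = 1.0
	}
	const dt = 0.001
	for t := 0.0; t < tEnd; t += dt {
		cap := capacity // snapshot so all items compete simultaneously
		for i := range items {
			if i == exclude {
				continue
			}
			taken := items[i] * cap * dt
			items[i] -= taken
			capacity -= weights[i] * taken
		}
	}
	value := 0.0
	for i, v := range values {
		value += v * (1 - items[i])
	}
	return value
}

func main() {
	w := []float64{2, 4, 6, 9}
	v := []float64{10, 12, 12, 16}
	fmt.Printf("baseline:       %.2f\n", simulateKnapsack(w, v, 15, 10, -1)) // ≈ 35.71
	fmt.Printf("item2 excluded: %.2f\n", simulateKnapsack(w, v, 15, 10, 2))  // ≈ 37.75
}
```

Without item2, the remaining weights (2 + 4 + 9) exactly match the capacity, so the trajectory creeps toward the integer optimum of 38 as the horizon grows.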

Comparison with Branch-and-Bound

The two approaches solve the same problem through fundamentally different mechanisms.

Branch-and-bound treats optimization as search. It builds a decision tree where each node represents a binary choice: take this item or skip it. The algorithm explores branches, computing upper bounds to prune paths that can’t improve on the best solution found so far. It’s systematic enumeration with smart pruning — still exponential in the worst case, but practical for moderate problem sizes.

DDM/ODE treats optimization as simulation. Instead of explicit decisions, items compete for capacity through continuous dynamics. The mass-action kinetics means all enabled transitions fire simultaneously at rates proportional to available resources. There’s no decision tree — just differential equations evolving toward equilibrium.

| Aspect | Branch-and-Bound | DDM/ODE |
|---|---|---|
| Core operation | Binary search tree | Continuous dynamics |
| Decisions | Explicit (take/skip) | Emergent (competition) |
| Solution type | Exact integer | Fractional approximation |
| Complexity | Exponential (pruned) | Polynomial (ODE integration) |
| Primary output | “What’s optimal?” | “Why is it optimal?” |

For exact solutions, branch-and-bound wins — it finds items 0, 1, 3 with value 38 directly. The ODE reaches 35.71 by taking fractional amounts of everything.

But the ODE reveals structure that search obscures. The exclusion analysis shows item2 is actively counterproductive. Branch-and-bound discovers this implicitly through pruning, but it doesn’t report it as an insight. The ODE makes the competition visible.

When to Use Each

| DDM/ODE Approach | Branch-and-Bound |
|---|---|
| Exploratory analysis | Exact solutions |
| Sensitivity insights | Guaranteed optimum |
| Fast iteration | Proven correctness |
| Understanding why | Finding what |

In practice, the approaches complement each other. Use ODE simulation to understand the problem structure — which items matter, which compete, where the bottlenecks are. Then use branch-and-bound (or dynamic programming) to find the exact solution, informed by the structural insights.

Beyond Four Items

The 4-item example is intentionally small. For larger instances:

  • 100 items: The ODE simulation remains polynomial — the number of transitions scales linearly. Exclusion analysis requires 100 ODE runs (one per item). Branch-and-bound may require exponentially many branches.

  • Correlated items: When items have similar efficiency ratios, the exclusion analysis shows which ones truly contribute and which are interchangeable. This information is invisible to pure optimization.

  • Multiple constraints: Adding a second constraint (volume, fragility) means adding a second capacity place with its own arc weights. The Petri net handles this naturally — each constraint is an independent place. Branch-and-bound becomes the multidimensional knapsack problem, which is significantly harder.

The Pattern

The knapsack model demonstrates a fourth application pattern, distinct from resources (Chapter 5), games (Chapter 6), and constraints (Chapter 7):

  1. Decision variables become places (items available or taken)
  2. Resource constraints become shared capacity places with weighted arcs
  3. Competition emerges from mass-action kinetics — items fight for capacity
  4. Exclusion analysis reveals which elements actively hurt the objective
  5. Continuous relaxation approximates the optimum; structural insight guides to the exact solution

This is optimization by simulation rather than search. The ODE doesn’t find the optimal solution directly — it reveals the solution’s structure. That structure, in turn, makes finding the optimum straightforward.

Try it live: Explore the Knapsack optimizer at pilot.pflow.xyz to see mass-action kinetics reveal optimal item selection.

Biochemistry — Enzyme Kinetics

Learning objective: See how Petri nets naturally model chemical reactions, recovering classical equations.

The previous chapters applied Petri nets to domains where the mathematical formalism is borrowed — resource flows, games, puzzles, optimization. This chapter returns to the domain where the formalism originated: chemistry. Enzyme kinetics is the study of how enzymes catalyze reactions, and the Michaelis-Menten equation that describes it is one of biochemistry’s most important results.

Textbooks derive Michaelis-Menten with steady-state assumptions and algebraic manipulation. We don’t need any of that. A 4-place Petri net with mass-action kinetics produces the same equation automatically. The structure produces the behavior.

Substrate, Enzyme, Product

An enzyme catalyzes the conversion of substrate to product in three steps:

  1. Binding: enzyme and substrate combine to form a complex (rate k1)
  2. Unbinding: the complex falls apart without producing product (rate k-1)
  3. Catalysis: the complex converts substrate to product, releasing the enzyme (rate k2)

The enzyme is not consumed — it cycles between free and bound states. This cycling is the hallmark of catalysis, and it maps perfectly to Petri net token flow.

The Petri Net

Four places, three transitions:

| Place | Initial | Role |
|---|---|---|
| substrate | 100 | Free substrate molecules |
| enzyme | 10 | Free enzyme molecules |
| complex | 0 | Enzyme-substrate complex |
| product | 0 | Converted product |

| Transition | Rate | Arcs |
|---|---|---|
| bind | k1 = 0.01 | substrate + enzyme → complex |
| unbind | k-1 = 0.1 | complex → substrate + enzyme |
| catalyze | k2 = 0.5 | complex → product + enzyme |

Enzyme kinetics reaction network and concentration curves over time

The arc structure encodes the chemistry directly:

substrate + enzyme --> bind     --> complex
complex            --> unbind   --> substrate + enzyme
complex            --> catalyze --> product + enzyme

The enzyme appears as both input and output of catalyze — it’s consumed when the complex forms and returned when the product is released. This cyclic flow is the structural signature of catalysis.

In Code

net, rates := petri.Build().
    Place("substrate", 100).
    Place("enzyme", 10).
    Place("complex", 0).
    Place("product", 0).
    Transition("bind").
    Transition("unbind").
    Transition("catalyze").
    Arc("substrate", "bind", 1).
    Arc("enzyme", "bind", 1).
    Arc("bind", "complex", 1).
    Arc("complex", "unbind", 1).
    Arc("unbind", "substrate", 1).
    Arc("unbind", "enzyme", 1).
    Arc("complex", "catalyze", 1).
    Arc("catalyze", "product", 1).
    Arc("catalyze", "enzyme", 1).
    WithRates(map[string]float64{
        "bind":     0.01,
        "unbind":   0.1,
        "catalyze": 0.5,
    })

The net is complete. Four places, three transitions, nine arcs. Every biochemistry textbook’s enzyme kinetics chapter is encoded in this structure.

Mass-Action Kinetics as Native Semantics

With mass-action kinetics, each transition fires at a rate proportional to the product of its input concentrations:

v_bind = k1·[S]·[E]    v_unbind = k-1·[C]    v_catalyze = k2·[C]

The ODE system follows directly from the Petri net topology. Using the incidence matrix N and rate vector v, dM/dt = N·v(M), which expands to:

d[S]/dt = -k1·[S]·[E] + k-1·[C]
d[E]/dt = -k1·[S]·[E] + (k-1 + k2)·[C]
d[C]/dt =  k1·[S]·[E] - (k-1 + k2)·[C]
d[P]/dt =  k2·[C]

We didn’t write these equations. They emerged from the net structure and mass-action kinetics — exactly the process described in Chapter 3. The incidence matrix encodes all the stoichiometry. The mass-action rate law provides the dynamics. The ODE solver does the rest.
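The mass-action system for this net can be integrated directly. A minimal explicit-Euler sketch, using the rate constants from the builder code (k1 = 0.01, k-1 = 0.1, k2 = 0.5); the function names and the t = 60 horizon are illustrative:

```go
package main

import "fmt"

// simulateEnzyme integrates the four-species mass-action ODE system
// with explicit Euler and returns the final concentrations.
// k1 = bind, km1 = unbind (k-1), k2 = catalyze.
func simulateEnzyme(s, e, c, p, k1, km1, k2, tEnd, dt float64) (float64, float64, float64, float64) {
	for t := 0.0; t < tEnd; t += dt {
		vBind := k1 * s * e   // substrate + enzyme -> complex
		vUnbind := km1 * c    // complex -> substrate + enzyme
		vCat := k2 * c        // complex -> product + enzyme
		s += dt * (-vBind + vUnbind)
		e += dt * (-vBind + vUnbind + vCat)
		c += dt * (vBind - vUnbind - vCat)
		p += dt * vCat
	}
	return s, e, c, p
}

func main() {
	s, e, c, p := simulateEnzyme(100, 10, 0, 0, 0.01, 0.1, 0.5, 60, 0.001)
	fmt.Printf("substrate=%.2f enzyme=%.2f complex=%.2f product=%.2f\n", s, e, c, p)
	fmt.Printf("enzyme conserved: %.4f\n", e+c)   // stays at 10
	fmt.Printf("mass conserved:   %.4f\n", s+c+p) // stays at 100
}
```

Note that the Euler update preserves the two linear conservation laws exactly, because each invariant’s net rate of change is identically zero.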

Emergent Michaelis-Menten Behavior

At steady state (d[C]/dt = 0), the complex concentration stabilizes:

k1·[S]·[E] = (k-1 + k2)·[C]

Solving for the reaction rate v = d[P]/dt = k2·[C] and using E_total = [E] + [C]:

v = Vmax·[S] / (Km + [S])

where Km = (k-1 + k2)/k1 and Vmax = k2·E_total.

This is the Michaelis-Menten equation. We didn’t derive it — the ODE solver produces the saturation curve directly from the Petri net. Running the simulation with increasing initial substrate concentrations traces out the classic hyperbolic curve: rate increases linearly at low [S], then levels off as enzyme becomes saturated.

The Saturation Effect

Why does the rate saturate? The Petri net makes it visible through token flow.

At low substrate, most enzyme tokens are in the enzyme place (free). Increasing [S] means more bind events, so the rate climbs linearly. But at high substrate, almost all enzyme tokens are tied up in complex. Adding more substrate can’t speed things up — there’s no free enzyme to bind with.

Km is the substrate concentration at half-maximum rate. It measures binding affinity. With the default parameters: Km = (0.1 + 0.5)/0.01 = 60. You need 60 substrate molecules to reach half the maximum rate.

Vmax is the theoretical maximum rate when all enzyme is saturated. With 10 enzyme molecules and k2 = 0.5: Vmax = 0.5 × 10 = 5.0.

Both constants emerge from the simulation without being computed algebraically.
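As a cross-check, the closed forms are easy to compute directly (constant and function names here are illustrative):

```go
package main

import "fmt"

// Rate constants from the model; km1 stands for k-1. eTotal is the
// total enzyme pool.
const k1, km1, k2, eTotal = 0.01, 0.1, 0.5, 10.0

// michaelisConstants computes Km and Vmax from the rate constants.
func michaelisConstants() (km, vmax float64) {
	return (km1 + k2) / k1, k2 * eTotal
}

// mmRate is the Michaelis-Menten rate law v = Vmax*S/(Km+S).
func mmRate(s float64) float64 {
	km, vmax := michaelisConstants()
	return vmax * s / (km + s)
}

func main() {
	km, vmax := michaelisConstants()
	fmt.Printf("Km = %.1f, Vmax = %.1f\n", km, vmax) // Km = 60.0, Vmax = 5.0
	fmt.Printf("rate at S = Km: %.2f\n", mmRate(km)) // half of Vmax: 2.50
}
```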

Simulation Predictions

The ODE solver computes several predictions from the initial conditions and rate constants:

| Prediction | Default Value | What It Means |
|---|---|---|
| Time to 50% conversion | ~17s | When half the substrate is consumed |
| Peak reaction rate | ~2.95 | Maximum d[P]/dt, approaches Vmax |
| Final yield | ~70% | Percentage of substrate converted |
| Km / Vmax | 60 / 5.0 | The Michaelis-Menten constants |

The concentration curves show the classic pattern: substrate depletes as product accumulates, with the enzyme-substrate complex rising quickly then falling as substrate runs out.

Conservation Laws from P-Invariants

The Petri net enforces conservation laws automatically through its structure:

Enzyme conservation:

[enzyme] + [complex] = E_total = 10
Free enzyme plus bound enzyme always equals the initial enzyme count. This isn’t coded as a constraint — it’s a structural property of the net. The enzyme token circulates through enzyme → complex → enzyme but never leaves the system.

Check via P-invariant: the weight vector w = [0, 1, 1, 0] (for places [substrate, enzyme, complex, product]) satisfies w·N = 0 for every column of the incidence matrix N:

  • bind column: (0)(-1) + (1)(-1) + (1)(+1) + (0)(0) = 0
  • unbind column: (0)(+1) + (1)(+1) + (1)(-1) + (0)(0) = 0
  • catalyze column: (0)(0) + (1)(+1) + (1)(-1) + (0)(+1) = 0

Mass conservation:

[substrate] + [complex] + [product] = 100
Substrate is either free, bound in complex, or converted to product. Mass is conserved because every arc that removes a token from one place adds it to another.

These invariants are verified at every timestep of the ODE simulation. If they ever fail, something is wrong with the model — a missing arc, an incorrect weight, a bug in the solver. Conservation laws are free structural correctness checks.
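Checking a candidate invariant is a single matrix-vector product. A sketch using the enzyme net’s incidence matrix, with entries read off the nine arcs (the array-based representation is illustrative):

```go
package main

import "fmt"

// Incidence matrix N for the enzyme net: rows are places
// [substrate, enzyme, complex, product], columns are transitions
// [bind, unbind, catalyze]. Entry = tokens produced minus consumed.
var incidence = [4][3]int{
	{-1, +1, 0},  // substrate
	{-1, +1, +1}, // enzyme
	{+1, -1, -1}, // complex
	{0, 0, +1},   // product
}

// isPInvariant reports whether w·N == 0, i.e. the weighted token sum
// is unchanged by every transition.
func isPInvariant(w [4]int) bool {
	for t := 0; t < 3; t++ {
		sum := 0
		for p := 0; p < 4; p++ {
			sum += w[p] * incidence[p][t]
		}
		if sum != 0 {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(isPInvariant([4]int{0, 1, 1, 0})) // enzyme conservation: true
	fmt.Println(isPInvariant([4]int{1, 0, 1, 1})) // mass conservation:   true
	fmt.Println(isPInvariant([4]int{1, 0, 0, 0})) // substrate alone:     false
}
```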

Parameter Exploration and Sensitivity

The model invites exploration by changing rate constants:

High enzyme, low substrate: The reaction completes almost instantly. All substrate is converted because there’s plenty of enzyme. The rate curve is a sharp spike.

Low k1 (k1 = 0.001): Binding becomes the bottleneck. Km shoots up to 600, meaning enormous substrate concentrations are needed for half-max rate. The enzyme is effectively less efficient.

High k2 (k2 = 2.0): The enzyme turns over faster. Vmax increases to 20, but Km also increases to 210. Faster catalysis means the complex is consumed faster, so more substrate is needed to keep it populated.

Equal k-1 and k2 (k-1 = k2 = 0.5): Half the binding events are “wasted” on unbinding. Km = (0.5 + 0.5)/0.01 = 100 — higher than the default 60. The enzyme is less efficient.

Each parameter change is a different set of rate constants on the same net. The structure stays the same. The dynamics change. This is the pattern throughout the book: one model, many scenarios.

From Chemistry to Everything

The Michaelis-Menten pattern appears far beyond biochemistry. Anywhere a limited resource cycles between free and occupied states, the same saturation dynamics emerge:

  • CPU scheduling: processes (substrate) compete for cores (enzyme), creating context-switch overhead (complex formation)
  • Customer service: customers (substrate) wait for agents (enzyme), with a queue (complex) that saturates
  • Manufacturing: raw materials (substrate) need machines (enzyme), with work-in-progress (complex) limited by machine count

In each case, the Petri net structure is identical — only the labels change. Four places, three transitions, nine arcs. The same Km and Vmax characterize the system’s capacity and responsiveness.

This is the ComputationNet pattern from Chapter 4 in its purest form: continuous quantities, rate-based firing, conservation laws, and emergent behavior from structure. The enzyme kinetics model is the smallest non-trivial example — small enough to understand completely, yet producing one of the most important results in biochemistry.

Complex State Machines — Texas Hold’em

Learning objective: Model multi-player, multi-phase systems with roles and guards.

The previous chapters built progressively: resources (coffee shop), two-player games (tic-tac-toe), constraint puzzles (Sudoku), optimization (knapsack), and chemistry (enzyme kinetics). Each used a focused subset of Petri net features. Texas Hold’em poker demands all of them simultaneously: sequential phases, multi-player turn control, role-based permissions, conditional betting, and event sourcing for audit trails.

This chapter shows that the formalism scales. The same four-term primitive — cell, func, arrow, guard — handles a system with dozens of states, multiple interacting players, and complex conditional logic. No new machinery is needed. The complexity is in the model, not the formalism.

The Challenge

Poker has intricate state:

  • Phases: preflop → flop → turn → river → showdown — a sequential state machine
  • Turn order: Players act in sequence, skipping folded players
  • Actions: fold, check, call, raise — each with preconditions on chips, bets, and game state
  • Roles: Only the dealer can deal cards, only active players can bet, only the admin can determine winners
  • Audit: Every action must be logged for replay and verification

Traditional implementations scatter this logic across conditionals, state variables, and validation functions. A Petri net makes it explicit: the structure is the specification. Illegal states are structurally unreachable rather than checked-for at runtime.

Phase Places

The core flow is a WorkflowNet — a sequential state machine where a single token traces a path through phases:

waiting -> preflop -> flop -> turn_round -> river -> showdown -> complete

Each phase is a place. Phase transitions move the game forward:

preflop ------> deal_flop --> flop
                 ^
betting_done ----+

The deal_flop transition requires two things: a token in preflop (we’re in that phase) and a token in betting_done (all players have acted). Both must be present for the transition to fire. This ensures cards aren’t dealt mid-betting.

The phase token is a cursor — exactly one phase place holds a token at any time. The P-invariant:

M(waiting) + M(preflop) + M(flop) + M(turn_round) + M(river) + M(showdown) + M(complete) = 1
This is the WorkflowNet guarantee: mutual exclusion. The game is always in exactly one phase.

Turn Tokens and Role-Based Access

Each player has a turn place: p0_turn, p1_turn, …, p4_turn. When it’s Player 0’s turn, only p0_turn holds a token. Player actions consume their turn token and produce the next player’s:

p0_check:
  inputs:  [p0_turn, p0_active]
  outputs: [p1_turn]

Player 0 can only check when two conditions hold: it’s their turn (p0_turn has a token) AND they’re still in the hand (p0_active has a token). After checking, the turn token moves to Player 1.

This is the same turn control mechanism from tic-tac-toe (Chapter 6), generalized to N players. The token passes in sequence, enforcing strict turn order without explicit checks.

Folding and Skipping

When a player folds, their p_active token is consumed:

p2_fold:
  inputs:  [p2_turn, p2_active]
  outputs: [p3_turn]
  // Note: p2_active is NOT returned

Subsequent rounds must skip folded players. A skip transition handles this:

p2_skip:
  inputs:  [p2_turn]  // No p2_active required
  outputs: [p3_turn]
  guard:   {!p2_active}  // Only fire when player is folded

The guard ensures the skip only fires for folded players. Active players can’t be skipped — they must act. This combination of structural enabling (turn tokens) and guards (active status) handles the full complexity of variable-player turn order.
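The combination of structural enabling (turn tokens) and guards (active status) can be sketched with a minimal token engine. The engine below is illustrative, not the pflow runtime; place names follow the text:

```go
package main

import "fmt"

// transition consumes tokens from inputs, produces into outputs, and
// may carry a guard evaluated against the current marking.
type transition struct {
	inputs, outputs []string
	guard           func(m map[string]int) bool
}

// fire checks token availability and the guard, then applies the
// marking update. Returns false if the transition is not enabled.
func fire(m map[string]int, t transition) bool {
	for _, p := range t.inputs {
		if m[p] < 1 {
			return false
		}
	}
	if t.guard != nil && !t.guard(m) {
		return false
	}
	for _, p := range t.inputs {
		m[p]--
	}
	for _, p := range t.outputs {
		m[p]++
	}
	return true
}

func main() {
	// Player 2 has the turn but folded earlier (no active token).
	m := map[string]int{"p2_turn": 1, "p2_active": 0}
	p2Check := transition{inputs: []string{"p2_turn", "p2_active"}, outputs: []string{"p3_turn"}}
	p2Skip := transition{
		inputs:  []string{"p2_turn"},
		outputs: []string{"p3_turn"},
		guard:   func(m map[string]int) bool { return m["p2_active"] == 0 },
	}
	fmt.Println(fire(m, p2Check)) // false: no active token, can't act
	fmt.Println(fire(m, p2Skip))  // true: folded player is skipped
	fmt.Println(m["p3_turn"])     // 1: turn passed to player 3
}
```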

Betting Conditions as Guard Expressions

Some actions require conditions beyond token availability. Guards from Chapter 4 handle this:

(action p0_raise
  :guard {bets[0] >= current_bet && chips[0] >= raise_amount})

This ensures:

  • Player has matched the current bet (can’t raise without calling first)
  • Player has enough chips to raise

Guards add business logic without changing the Petri net structure. The net handles what can happen (which transitions exist). Guards handle when it can happen (what conditions must hold). This separation is essential for complex systems — the structural properties (deadlock freedom, mutual exclusion) can be verified from the net alone, independent of the guard expressions.

The Full Action Set

For each player, four actions with their preconditions:

| Action | Preconditions | Effect |
|---|---|---|
| fold | Turn token + active | Remove from hand |
| check | Turn token + active + no outstanding bet | Pass action |
| call | Turn token + active + sufficient chips | Match current bet |
| raise | Turn token + active + sufficient chips | Increase bet |

Five players × four actions = 20 player transitions, plus 5 skip transitions for folded players, plus 7 phase transitions (start, deal flop/turn/river, showdown, determine winner, end hand). Total: 32 transitions.

The Model Structure

The full model:

Places (17+):
  waiting, preflop, flop, turn_round,     -- Phase places
  river, showdown, complete
  p0_turn, p1_turn, p2_turn,              -- Turn tokens
  p3_turn, p4_turn
  p0_active, p1_active, p2_active,        -- Active markers
  p3_active, p4_active
  betting_done                            -- Sync signal

Transitions (32):
  start_hand, deal_flop, deal_turn,        -- Phase transitions
  deal_river, go_showdown,
  determine_winner, end_hand
  p0_fold, p0_check, p0_call, p0_raise,   -- Player 0 actions
  p1_fold, p1_check, p1_call, p1_raise,   -- Player 1 actions
  ...                                      -- (5 players x 4 actions)
  p0_skip, p1_skip, ...                    -- Skip folded players

Role-Based Access Control

Not all transitions are available to all players. The model uses roles to restrict who can fire which transitions:

| Role | Transitions | Purpose |
|---|---|---|
| dealer | deal_flop, deal_turn, deal_river | Only dealer advances phases |
| player0 | p0_fold, p0_check, p0_call, p0_raise | Player-specific actions |
| admin | end_hand, determine_winner | Game control |

When deployed as a service, the API enforces these roles — a player can’t fire another player’s transitions. The Petri net defines what each role can do; the runtime enforces who has which role.

Event Sourcing for Audit Trails

Every transition firing creates an immutable event:

[
  {"action": "start_hand", "time": "10:00:00"},
  {"action": "p0_raise", "amount": 50, "time": "10:00:15"},
  {"action": "p1_call", "time": "10:00:23"},
  {"action": "p2_fold", "time": "10:00:31"},
  {"action": "deal_flop", "cards": ["Ah", "Ks", "7d"], "time": "10:00:35"}
]

The Petri net’s discrete transitions map perfectly to event sourcing:

  • Replay: Given the initial marking and the event sequence, reconstruct any game state by replaying transitions
  • Audit: Verify that every transition was enabled when it fired — illegal moves are provably impossible
  • Undo: Roll back to any previous state by replaying up to that point

This is the event sourcing pattern from the workspace’s CLAUDE.md: state = fold(fire, initialMarking, events). Each event is a transition firing. Each state is a marking. The fold is the firing rule applied sequentially.
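A minimal replay sketch (the event and marking representations are illustrative; the event log here is a toy, not the JSON shown above):

```go
package main

import "fmt"

// event records which transition fired and its input/output places.
type event struct {
	action  string
	inputs  []string
	outputs []string
}

// replay folds the event log over the initial marking, applying the
// firing rule per event. It panics if an event was not enabled when
// it fired — the audit property.
func replay(initial map[string]int, log []event) map[string]int {
	m := make(map[string]int)
	for k, v := range initial {
		m[k] = v
	}
	for _, e := range log {
		for _, p := range e.inputs {
			if m[p] < 1 {
				panic("audit failure: " + e.action + " was not enabled")
			}
			m[p]--
		}
		for _, p := range e.outputs {
			m[p]++
		}
	}
	return m
}

func main() {
	initial := map[string]int{"waiting": 1}
	log := []event{
		{"start_hand", []string{"waiting"}, []string{"preflop", "p0_turn"}},
		{"p0_check", []string{"p0_turn"}, []string{"p1_turn"}},
	}
	final := replay(initial, log)
	fmt.Println(final["preflop"], final["p1_turn"]) // 1 1
}
```

Undo falls out for free: replaying a prefix of the log reconstructs any earlier marking.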

ODE Analysis of Game Flow

While the game runs discretely, ODE simulation provides strategic analysis. Converting the Petri net to continuous dynamics lets us evaluate position strength:

// For each possible action, create hypothetical state
// and simulate forward
for _, action := range availableActions {
    hypState := stateutil.Apply(currentState, action.Delta)

    prob := solver.NewProblem(net, hypState,
        [2]float64{0, 5.0}, rates)
    sol := solver.Solve(prob, solver.Tsit5(), solver.FastOptions())
    final := sol.GetFinalState()

    score := final["win_p0"] - final["win_p1"]
}

The ODE simulation explores possible game continuations:

  • Higher flow to win_p0 indicates stronger positions
  • Raising in a strong position increases flow to win places
  • Folding prevents further loss when position is weak

This is the same pattern as tic-tac-toe (Chapter 6), but with a much larger state space. The strategic insights are less precise — poker has hidden information (hole cards) that the ODE can’t fully model — but they provide useful guidance for position evaluation.

Hand Strength from Structure

The poker model can include draw potential places that track partial hands:

  • flush_draw: 4 cards of the same suit
  • straight_draw: 4 cards in sequence
  • overcards: hole cards above board cards

These places feed into scoring transitions that evaluate hand strength. The ODE simulation integrates over possible completions, producing an aggregate strength score. Combined with opponent range estimation (which hands the opponent likely holds given their actions), this creates a complete decision framework.

Compositional Design

The Texas Hold’em model is a composition of simpler patterns:

  1. WorkflowNet — Phase sequence (waiting → preflop → … → complete)
  2. ResourceNet — Chip tracking (bet placement, pot management)
  3. GameNet — Turn control and player actions
  4. Guards — Conditional betting logic

Each pattern is independently verifiable:

  • The phase sequence guarantees mutual exclusion (one phase at a time)
  • Chip tracking preserves conservation (total chips constant)
  • Turn control ensures exactly one player acts at a time
  • Guards prevent illegal bets

Composition adds links between patterns — the betting_done signal connects turn control to phase advancement — without breaking any individual guarantee. This is the assume-guarantee reasoning from Chapter 4: verify components in isolation, check only the boundaries.

Why Petri Nets for Complex Systems

Traditional game code mixes state, rules, and UI into monolithic logic. A Petri net separates concerns:

  • Structure = What’s possible (places, transitions, arcs)
  • State = What’s current (token marking)
  • Rules = What’s allowed (guards, roles)
  • History = What happened (events)

This separation makes complex systems:

  • Verifiable: Prove invariants (one player acts at a time, chips conserved)
  • Replayable: Deterministic event sequence from any initial state
  • Analyzable: ODE simulation for strategy, reachability for deadlock detection
  • Evolvable: Add rules (side pots, all-in) without rewriting existing logic

Texas Hold’em is at the complex end of the spectrum covered in Part II. It combines every pattern from the previous chapters into a single model. The formalism doesn’t struggle with this complexity — it absorbs it. The four-term primitive handles a coffee shop, a tic-tac-toe board, a Sudoku grid, a knapsack, an enzyme reaction, and a poker table with the same vocabulary.

Part III of this book shifts focus from building models to analyzing them — process mining, zero-knowledge proofs, and the tooling that makes Petri net development practical.

Process Mining

Learning objective: Discover Petri net models from event logs and use them for prediction.

The previous chapters built Petri net models by hand — designing places, transitions, and arcs to capture known rules. But what about systems where the rules aren’t written down? Where the actual process diverges from the official procedure? Process mining reverses the direction: instead of building a model and running it, you collect data from a running system and discover the model automatically.

This chapter covers the complete pipeline: event logs as raw material, discovery algorithms that extract Petri net structure, timing analysis that learns transition rates, and predictive monitoring that uses the learned model to forecast outcomes before they happen.

Event Logs: The Raw Material

Every information system generates event logs. Hospital admissions systems record when patients register, get triaged, see a doctor, and are discharged. ERP systems record when orders are placed, approved, shipped, and invoiced. The events are already there — process mining extracts structure from them.

An event log has three required fields:

Field       Example                Purpose
---------   -------------------    ----------------------
Case ID     P101                   Which process instance
Activity    Registration           What happened
Timestamp   2024-11-21 08:15:00    When it happened

Optional fields — resource (who did it), lifecycle (start/complete), cost, outcome — enrich the analysis but aren’t required for discovery.

A trace is the sequence of activities for a single case. Patient P101’s trace might be:

Registration -> Triage -> Doctor_Consultation -> Discharge

Patient P102, who needed lab work:

Registration -> Triage -> Doctor_Consultation -> Lab_Test -> Results_Review -> Discharge

These are process variants — different paths through the same system. In a typical hospital ER, the simple path (no lab, no imaging) accounts for about 60% of cases. Adding lab tests covers another 30%. X-rays, surgery, and other branches handle the remaining 10%. Process mining discovers these variants automatically from the log data.
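Extracting variants needs no special machinery: group events by case, order each trace by timestamp, and count distinct activity sequences. A minimal stdlib sketch — the Event struct and Variants function are illustrative, not the eventlog package's actual types:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Event is an illustrative record with the three required fields.
type Event struct {
	CaseID    string
	Activity  string
	Timestamp int64 // Unix seconds
}

// Variants groups events by case, orders each trace by timestamp,
// and counts how many cases follow each distinct activity sequence.
func Variants(events []Event) map[string]int {
	traces := map[string][]Event{}
	for _, e := range events {
		traces[e.CaseID] = append(traces[e.CaseID], e)
	}
	counts := map[string]int{}
	for _, t := range traces {
		sort.Slice(t, func(i, j int) bool { return t[i].Timestamp < t[j].Timestamp })
		acts := make([]string, len(t))
		for i, e := range t {
			acts[i] = e.Activity
		}
		counts[strings.Join(acts, " -> ")]++
	}
	return counts
}

func main() {
	events := []Event{
		{"P101", "Registration", 1}, {"P101", "Triage", 2},
		{"P101", "Doctor_Consultation", 3}, {"P101", "Discharge", 4},
		{"P102", "Registration", 1}, {"P102", "Triage", 2},
		{"P102", "Doctor_Consultation", 3}, {"P102", "Lab_Test", 4},
		{"P102", "Results_Review", 5}, {"P102", "Discharge", 6},
	}
	for variant, n := range Variants(events) {
		fmt.Println(n, variant)
	}
}
```

Each distinct key in the result is one process variant; its count is the number of cases that followed that path.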

Loading Event Logs

config := eventlog.DefaultCSVConfig()
log, _ := eventlog.ParseCSV("hospital_er.csv", config)

summary := log.Summarize()
// Parsed 1000 cases with 5200 events
// Found 6 unique activities
// Discovered 4 process variants

The eventlog package handles CSV and JSONL formats with configurable column mappings. The summary immediately reveals the process’s complexity: how many variants exist, how many activities are involved, and how many cases the log contains.

Process Discovery

Given an event log, a discovery algorithm produces a Petri net that explains the observed behavior. Three algorithms illustrate the design space.

Common-Path Discovery

The simplest approach: find the most frequent trace variant and build a linear Petri net for it.

Variant 1 (60% of cases): Reg -> Triage -> Doctor -> Discharge

Discovered model:
[start] -> Reg -> [p1] -> Triage -> [p2] -> Doctor -> [p3] -> Discharge -> [end]

Each activity becomes a transition. Places are inserted between transitions to carry the control flow token. A token in start means the case has arrived; a token in end means it’s complete; a token in p2 means the patient has been triaged and is waiting for the doctor.

Common-path discovery is fast and always produces a valid model, but it ignores all variant behavior. It’s useful as a starting point or when the process is genuinely linear.

discovery, _ := mining.Discover(log, "common-path")
net := discovery.Net
// Model covers 60% of cases

The Alpha Algorithm

The Alpha algorithm, introduced by van der Aalst in 2004, discovers concurrent structure — not just sequences. It works by analyzing ordering relations between activities.

The algorithm builds a footprint matrix from the event log. For every pair of activities (A, B), it classifies their relationship:

Relation    Symbol   Meaning
---------   ------   --------------------------------------------------
Causality   A → B    A directly precedes B, never vice versa
Parallel    A ‖ B    Both orderings observed (A before B and B before A)
Choice      A # B    Neither ordering observed (exclusive alternatives)

Given traces:

Case 1: A -> B -> D -> E
Case 2: A -> C -> D -> E

The footprint matrix reveals: A → B (causal), A → C (causal), B # C (choice — they never co-occur), B → D and C → D (both causal), D → E (causal).
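The classification can be reproduced directly: record every directly-follows pair, then compare the two directions for each pair of activities. A self-contained sketch (the function name and relation strings are illustrative):

```go
package main

import "fmt"

// Footprint classifies each ordered activity pair from a set of traces:
// "->" causal, "<-" reverse causal, "||" parallel, "#" choice.
func Footprint(traces [][]string) map[[2]string]string {
	follows := map[[2]string]bool{}
	acts := map[string]bool{}
	for _, t := range traces {
		for i := 0; i < len(t)-1; i++ {
			follows[[2]string{t[i], t[i+1]}] = true
		}
		for _, a := range t {
			acts[a] = true
		}
	}
	rel := map[[2]string]string{}
	for a := range acts {
		for b := range acts {
			if a == b {
				continue
			}
			ab, ba := follows[[2]string{a, b}], follows[[2]string{b, a}]
			switch {
			case ab && ba:
				rel[[2]string{a, b}] = "||" // both orderings observed
			case ab:
				rel[[2]string{a, b}] = "->" // a directly precedes b
			case ba:
				rel[[2]string{a, b}] = "<-" // b directly precedes a
			default:
				rel[[2]string{a, b}] = "#" // never adjacent: choice
			}
		}
	}
	return rel
}

func main() {
	traces := [][]string{{"A", "B", "D", "E"}, {"A", "C", "D", "E"}}
	r := Footprint(traces)
	fmt.Println(r[[2]string{"A", "B"}], r[[2]string{"B", "C"}], r[[2]string{"D", "E"}])
}
```

Running it on the two traces above reports A→B causal, B#C choice, and D→E causal, matching the footprint matrix.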

From these relations, the Alpha algorithm constructs a Petri net:

       -> [B] ->
[A] ->          [D] -> [E]
       -> [C] ->

B and C are alternative paths after A, both converging at D. The algorithm discovers this branching structure automatically — no human specified that B and C are alternatives.

The implementation builds place candidates from pairs of causally connected activity sets, then filters to keep only the maximal candidates:
miner := mining.NewAlphaMiner(log)
net := miner.Mine()

The Alpha algorithm has known limitations: it can’t handle loops of length 1 or 2, and it’s sensitive to noise in the log. For real-world data, the heuristic miner is more robust.

Heuristic Miner

The heuristic miner adds noise tolerance by counting how often each activity pair occurs and applying thresholds:

Observations:
A -> B: 100 times
A -> C: 95 times
B -> D: 85 times
C -> X: 2 times   <-- noise (only 2% of cases)

With a 5% threshold, the C → X edge is filtered out. The resulting model is cleaner and more representative of the actual process, at the cost of ignoring rare behavior.
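The frequency filter fits in a few lines: count directly-follows pairs, then drop any edge seen in less than a threshold fraction of cases. A sketch with illustrative names, not the miner's actual API:

```go
package main

import "fmt"

// FilterEdges keeps only directly-follows edges observed in at least
// `threshold` fraction of cases — the heuristic miner's noise filter.
func FilterEdges(counts map[[2]string]int, totalCases int, threshold float64) [][2]string {
	var kept [][2]string
	for edge, n := range counts {
		if float64(n)/float64(totalCases) >= threshold {
			kept = append(kept, edge)
		}
	}
	return kept
}

func main() {
	counts := map[[2]string]int{
		{"A", "B"}: 100,
		{"A", "C"}: 95,
		{"B", "D"}: 85,
		{"C", "X"}: 2, // noise: only 2% of cases
	}
	kept := FilterEdges(counts, 100, 0.05)
	fmt.Println(len(kept), "edges survive the 5% threshold") // C -> X is dropped
}
```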

Timing Analysis and Rate Learning

Discovery tells us the process structure — which activities exist and how they connect. Timing analysis tells us the dynamics — how long each activity takes and how fast cases flow through the system.

Extracting Timing Statistics

stats := mining.ExtractTiming(log)

// Per-activity statistics
meanTriage := stats.GetMeanDuration("Triage")       // 615 seconds (10.3 min)
stdTriage := stats.GetStdDuration("Triage")          // 192 seconds
rateTriage := stats.EstimateRate("Triage")           // 0.001626 /sec

The ExtractTiming function computes, for each activity: mean duration, standard deviation, occurrence count, and estimated rate. It also computes inter-arrival times (how frequently new cases appear) and total case durations (end-to-end cycle time).

From Durations to Rates

The key conversion: rate = 1 / (mean duration). If triage takes an average of 10 minutes (600 seconds), the triage rate is 1/600 ≈ 0.00167 per second. This assumes exponentially distributed service times — the same memoryless property that mass-action kinetics uses.
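The conversion is a single division. A sketch, assuming exponential service times (the function name is illustrative):

```go
package main

import "fmt"

// RateFromMean converts a mean service time in seconds to a firing
// rate per second, assuming exponentially distributed service times.
func RateFromMean(meanSeconds float64) float64 {
	return 1.0 / meanSeconds
}

func main() {
	// Triage averages 10 minutes (600 s) -> 1/600 firings per second.
	fmt.Printf("%.5f /sec\n", RateFromMean(600))
}
```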

For a hospital ER with six activities:

Activity              Mean Duration   Estimated Rate
-------------------   -------------   --------------
Registration          12.5 min        0.00133 /sec
Triage                27.5 min        0.000606 /sec
Doctor_Consultation   31.2 min        0.000533 /sec
Lab_Test              97.5 min        0.000171 /sec
Results_Review        50.0 min        0.000333 /sec
Discharge             15.0 min        0.00111 /sec

The bottleneck is immediately visible: Lab_Test has the lowest rate, meaning it takes the longest. Any effort to reduce end-to-end time should target the lab.

Learning Rates for Simulation

The learned rates plug directly into the ODE solver:

rates := mining.LearnRatesFromLog(log, net)

initialState := net.SetState(nil)
prob := solver.NewProblem(net, initialState, [2]float64{0, 10000}, rates)
sol := solver.Solve(prob, solver.Tsit5(), solver.DefaultOptions())

The simulation now uses rates derived from actual data rather than guesses. The ODE dynamics reproduce the observed flow patterns: tokens accumulate before the lab (the bottleneck), flow smoothly through registration and triage (fast steps), and drain out through discharge.

This closes the loop: event log → discovered model → learned rates → simulation → prediction. The same Petri net formalism works for hand-crafted models (Parts I and II) and data-driven models (this chapter). The mathematics doesn’t care where the model came from.

Predictive Monitoring

With a learned model, we can predict the future. A patient who has completed registration and triage but not yet seen a doctor — what’s their expected remaining time? Will they violate the 4-hour SLA?

State Estimation

The first step is mapping the patient’s event history to a Petri net marking. Each observed activity corresponds to firing a transition:

currentState := monitoring.EstimateCurrentState(activeCase, net)

The function replays the event history through the net: start with a token in the start place, fire each observed transition in sequence, and the resulting marking represents the patient’s current position in the process. If the patient has completed Registration and Triage, the token sits in the place between Triage and Doctor_Consultation.

Simulation-Based Prediction

From the current state, the ODE solver simulates forward to predict completion:

predictor := monitoring.NewPredictor(net, rates)
pred := predictor.PredictFromState(currentState, currentTime)

remainingTime := pred.PredictedEndTime - pred.CurrentTime
confidence := pred.Confidence

The predictor watches for a token to arrive in the end place. The time at which the token crosses a threshold (0.5) is the predicted completion time. The confidence score reflects how much token mass reaches the end — higher confidence means the simulation is more certain about the outcome.

SLA Violation Detection

The real value is early warning. At 9:30 AM, a patient who arrived at 8:00 AM has been in the system for 90 minutes. The model predicts 180 more minutes of processing (lab test is the bottleneck). Total predicted time: 270 minutes. The 4-hour SLA (240 minutes) will be violated.

The alert fires 2.5 hours before the violation — enough time for intervention. Expedite the lab order, assign additional staff, or notify the patient about the expected wait.

Next Activity Prediction

Beyond completion time, the model predicts which activity happens next and how likely each option is:

predictions := monitoring.PredictNextActivity(activeCase, predictor)
// [{Activity: "Doctor_Consultation", Probability: 0.85, ExpectedTime: 15min},
//  {Activity: "Lab_Test", Probability: 0.15, ExpectedTime: 45min}]

The prediction uses mass-action kinetics: each enabled transition fires at a rate proportional to its rate constant times the product of its input place tokens. The transition with the highest effective rate is most likely to fire first. The probability is the ratio of each transition’s effective rate to the total.
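That computation fits in a few lines: each transition's effective rate is its rate constant times the product of its input place tokens, normalized over the total. A sketch with illustrative types, not the monitoring package's API:

```go
package main

import "fmt"

// Transition is an illustrative description of a rate-bearing transition.
type Transition struct {
	Name   string
	Rate   float64
	Inputs []int // indices of input places
}

// NextActivityProbs computes mass-action firing probabilities:
// effective rate = rate constant x product of input place tokens,
// normalized across all transitions.
func NextActivityProbs(marking []float64, ts []Transition) map[string]float64 {
	eff := make([]float64, len(ts))
	total := 0.0
	for i, t := range ts {
		r := t.Rate
		for _, p := range t.Inputs {
			r *= marking[p] // zero input tokens => zero effective rate
		}
		eff[i] = r
		total += r
	}
	probs := map[string]float64{}
	for i, t := range ts {
		if total > 0 {
			probs[t.Name] = eff[i] / total
		}
	}
	return probs
}

func main() {
	marking := []float64{1} // case is waiting in place 0
	ts := []Transition{
		{Name: "Doctor_Consultation", Rate: 0.85, Inputs: []int{0}},
		{Name: "Lab_Test", Rate: 0.15, Inputs: []int{0}},
	}
	fmt.Println(NextActivityProbs(marking, ts))
}
```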

The Hospital ER Case Study

Putting it all together with the mining demo:

// 1. Parse event log
log, _ := eventlog.ParseCSV("hospital.csv", config)
// 1000 cases, 5200 events, 6 activities, 4 variants

// 2. Extract timing
stats := mining.ExtractTiming(log)
// Registration: 12.5 min, Triage: 27.5 min, Doctor: 31.2 min
// Lab_Test: 97.5 min (bottleneck), Results_Review: 50.0 min

// 3. Discover model
discovery, _ := mining.Discover(log, "common-path")
net := discovery.Net
// 7 places, 6 transitions, covers 60% of cases

// 4. Learn rates
rates := mining.LearnRatesFromLog(log, net)

// 5. Simulate
prob := solver.NewProblem(net, net.SetState(nil), [2]float64{0, 10000}, rates)
sol := solver.Solve(prob, solver.Tsit5(), solver.DefaultOptions())

// 6. Use for prediction
predictor := monitoring.NewPredictor(net, rates)

The discovered model matches the observed behavior: tokens flow from registration through discharge, accumulating before the lab (the bottleneck). The simulation predicts an average case duration of ~166 minutes, consistent with the timing statistics extracted from the log.

What-If Analysis

The learned model enables scenario planning. “What if we add another lab technician?” — double the Lab_Test rate and re-simulate. “What if we add a fast-track for simple cases?” — add a bypass transition from Doctor to Discharge with a guard on case complexity.

Each scenario is a parameter change or structural modification to the same Petri net. The ODE solver produces updated predictions in milliseconds. No new data collection needed — just adjust the model and simulate.

Conformance Checking

Discovery produces a model. Conformance checking asks: how well does the model match reality?

Fitness measures how many traces in the log can be replayed on the model. If the model says A must precede B, but a trace shows B before A, that trace doesn’t fit. Fitness = (fitting traces) / (total traces). A fitness of 1.0 means every trace is consistent with the model.

Precision measures how much of the model’s allowed behavior actually occurs. A model that allows everything has perfect fitness but zero precision — it’s too general to be useful. A model that captures only the observed paths has high precision.

Good models balance both: high fitness (captures real behavior) with high precision (doesn’t over-generalize). The trade-off is complexity — more complex models fit more traces but allow more unobserved paths.

conf := mining.CheckConformance(log, net)
// Fitness: 0.87, FittingTraces: 870/1000

The 13% non-fitting traces represent process variants not captured by the common-path model. Switching to the Alpha algorithm or heuristic miner would increase fitness at the cost of model complexity.
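The fitness ratio can be sketched by replay: walk each trace through the model and count how many complete without violating it. A real replayer tracks full markings; this simplified version checks the model's directly-follows edges only, and its names are illustrative:

```go
package main

import "fmt"

// Fitness replays each trace against a set of allowed directly-follows
// edges and returns (fitting traces) / (total traces).
func Fitness(traces [][]string, allowed map[[2]string]bool) float64 {
	fitting := 0
	for _, t := range traces {
		ok := true
		for i := 0; i < len(t)-1; i++ {
			if !allowed[[2]string{t[i], t[i+1]}] {
				ok = false // trace deviates from the model
				break
			}
		}
		if ok {
			fitting++
		}
	}
	return float64(fitting) / float64(len(traces))
}

func main() {
	allowed := map[[2]string]bool{
		{"Reg", "Triage"}:      true,
		{"Triage", "Doctor"}:   true,
		{"Doctor", "Discharge"}: true,
	}
	traces := [][]string{
		{"Reg", "Triage", "Doctor", "Discharge"},        // fits the model
		{"Reg", "Triage", "Doctor", "Lab", "Discharge"}, // deviates at Doctor -> Lab
	}
	fmt.Println("fitness:", Fitness(traces, allowed))
}
```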

From Data to Model to Prediction

Process mining completes the Petri net toolkit. Parts I and II showed how to build models from domain knowledge — encoding rules as structure. This chapter shows how to extract models from data — discovering rules from behavior.

The pipeline:

  1. Collect event logs from existing systems
  2. Discover process structure using mining algorithms
  3. Learn transition rates from timestamps
  4. Validate via conformance checking and simulation
  5. Predict using ODE simulation with learned parameters
  6. Intervene based on early warnings

The same Petri net formalism handles both directions. A coffee shop model built by hand (Chapter 5) and a hospital process discovered from logs produce the same mathematical object: places, transitions, arcs, rates. The ODE solver doesn’t know or care which one was hand-crafted and which was mined. The incidence matrix, conservation laws, and simulation dynamics work identically.

This is the practical payoff of a universal formalism. Process mining connects Petri nets to the real world — any system that generates event logs can be modeled, analyzed, and predicted.

Zero-Knowledge Proofs

Learning objective: Prove a state transition is valid without revealing the state.

Every Petri net model in this book has been transparent — all token counts visible to all participants. The coffee shop owner sees inventory levels. Both tic-tac-toe players see the board. The hospital monitor sees patient states. But some applications require privacy. A voter should prove they cast a valid ballot without revealing their choice. A financial institution should verify a transfer is valid without exposing account balances. A player should prove their move is legal without revealing their strategy.

Zero-knowledge proofs make this possible. The key insight: a Petri net is already a formal specification of valid state transitions. A ZK circuit turns that specification into a cryptographic proof system. The prover demonstrates they followed the rules; the verifier confirms it without seeing the state.

The Statement Being Proven

Every ZK proof in this system proves one statement:

“I know a valid marking that, when transition T fires, produces the claimed next state.”

More precisely:

  1. The pre-state root (a hash) matches a specific private marking
  2. The post-state root matches a specific private marking
  3. The chosen transition was enabled — all input places had sufficient tokens
  4. The marking change is correct — post = pre + delta, where delta comes from the net topology

The verifier sees only the two state roots and the transition ID. The actual token counts remain hidden.

Public vs. Private

Visibility   What                              Who Sees It
----------   -------------------------------   -----------
Public       Pre-state root (hash)             Everyone
Public       Post-state root (hash)            Everyone
Public       Transition ID                     Everyone
Private      Pre-marking (all token counts)    Prover only
Private      Post-marking (all token counts)   Prover only

The state roots are commitments. They bind the prover to a specific marking without revealing it.

State Commitment via MiMC Hashing

The hash function matters. Inside a ZK circuit, every operation becomes arithmetic constraints on a finite field. SHA-256 needs thousands of boolean operations — roughly 25,000 constraints per hash. MiMC (Minimal Multiplicative Complexity) is designed for arithmetic circuits — it operates over the same finite field the circuit uses, costing roughly 300 constraints per hash.

func petriMimcHash(api frontend.API, values []frontend.Variable) frontend.Variable {
    h, _ := mimc.NewMiMC(api)
    for _, v := range values {
        h.Write(v)
    }
    return h.Sum()
}

The state root is the MiMC hash of every place's token count:

root = MiMC(m_0, m_1, ..., m_{NumPlaces-1})

where m_i is the token count of place i. To recover the actual token counts from a state root, an attacker would need to invert MiMC — computationally infeasible. The hash commits the prover to a specific marking without revealing it.

The Circuit in Five Steps

The gnark circuit for a Petri net transition proof has five steps. Each adds constraints that the prover must satisfy.

Step 1: Hash the Pre-Marking

Compute the MiMC hash of every place’s token count and assert it equals the public pre-state root:

preRoot := petriMimcHash(api, c.PreMarking[:])
api.AssertIsEqual(preRoot, c.PreStateRoot)

This binds the private marking to the public commitment. The prover can’t lie about what state they started from.

Step 2: Hash the Post-Marking

Same check for the post-state:

postRoot := petriMimcHash(api, c.PostMarking[:])
api.AssertIsEqual(postRoot, c.PostStateRoot)

Now both states are committed.

Step 3: Compute the Delta from Topology

From the transition ID, look up the Petri net topology to build a delta vector. The topology is baked into the circuit as constants — the incidence matrix encoded as input/output lists:

for t := 0; t < NumTransitions; t++ {
    isThis := api.IsZero(api.Sub(c.Transition, t))

    for _, p := range Topology[t].Inputs {
        deltas[p] = api.Sub(deltas[p], isThis)
    }
    for _, p := range Topology[t].Outputs {
        deltas[p] = api.Add(deltas[p], isThis)
    }
}

The isThis variable is 1 for the selected transition and 0 for all others. This multiplexes over all transitions without branching — arithmetic circuits can’t branch, so we compute all possibilities and select. Only the chosen transition’s delta contributes to the result.
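The same multiplexing can be demonstrated outside the circuit with plain integers — compute a 0/1 selector for every transition and accumulate its contribution, mirroring what the constraints enforce:

```go
package main

import "fmt"

// ArcDef lists a transition's input and output place indices.
type ArcDef struct {
	Inputs  []int
	Outputs []int
}

// Delta mirrors the circuit's branch-free multiplexing: every transition
// contributes sel * (outputs - inputs), where sel is 1 for the chosen
// transition and 0 for all others. Only the chosen transition's arcs
// affect the result — no branching required.
func Delta(topology []ArcDef, chosen, numPlaces int) []int {
	deltas := make([]int, numPlaces)
	for t, arcs := range topology {
		sel := 0
		if t == chosen {
			sel = 1 // the circuit computes this as IsZero(Transition - t)
		}
		for _, p := range arcs.Inputs {
			deltas[p] -= sel
		}
		for _, p := range arcs.Outputs {
			deltas[p] += sel
		}
	}
	return deltas
}

func main() {
	topology := []ArcDef{
		{Inputs: []int{0, 1}, Outputs: []int{2}}, // bind
		{Inputs: []int{2}, Outputs: []int{0, 1}}, // unbind
	}
	fmt.Println(Delta(topology, 0, 3)) // firing "bind": [-1 -1 1]
}
```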

Step 4: Assert Marking Change

For every place, check that the post-marking equals the pre-marking plus the delta:

for p := 0; p < NumPlaces; p++ {
    expected := api.Add(c.PreMarking[p], deltas[p])
    api.AssertIsEqual(c.PostMarking[p], expected)
}

This ensures the marking changed exactly as the topology dictates. No extra tokens appeared. No tokens vanished. The transition fired correctly.

Step 5: Assert Enabledness

For every input place of the selected transition, the pre-marking must have at least one token:

diff := api.Sub(c.PreMarking[p], isInput)
api.ToBinary(diff, 8)

The trick: ToBinary decomposes a value into bits. If the value were negative (insufficient tokens), it would wrap to a huge field element and the bit decomposition would fail — the proof can’t be generated. This is a standard gnark pattern for range checks: decompose into bits to prove non-negativity.

Topology as Circuit Constants

The most important property of this approach: the Petri net topology is baked into the circuit as constants.

var Topology = [NumTransitions]struct {
    Inputs  []int
    Outputs []int
}{
    {Inputs: []int{0, 1}, Outputs: []int{2}},    // bind
    {Inputs: []int{2}, Outputs: []int{0, 1}},     // unbind
    {Inputs: []int{2}, Outputs: []int{3, 1}},     // catalyze
}

This means:

  1. No application-specific logic in the circuit. The same Define() method works for any Petri net.
  2. Topology changes require recompilation. Adding a place or transition means a new trusted setup.
  3. The circuit is auditable. Anyone can inspect the topology to verify it matches the claimed rules.

The Petri net is the specification. The circuit is the verifier. The proof is the attestation that the specification was followed.

Groth16 Proofs

The proving system is Groth16, implemented by gnark. It requires a one-time trusted setup per circuit but produces:

  • Smallest proofs: ~128 bytes (2 G1 points + 1 G2 point)
  • Fastest verification: ~2ms, constant regardless of circuit size
  • Solidity export: gnark generates a ready-to-deploy verifier contract

Property        Groth16       PLONK       STARKs
-------------   -----------   ---------   -------
Proof size      ~128 B        ~400 B      ~50 KB
Verification    ~2 ms         ~5 ms       ~50 ms
Trusted setup   Per-circuit   Universal   None
Quantum-safe    No            No          Yes

For applications with small circuits and many proofs — like games or token transfers — Groth16’s compact proofs and fast verification dominate.

ZK Tic-Tac-Toe: A Complete Example

The tic-tac-toe model from Chapter 6 has 33 places and 35 transitions. The ZK version uses two circuits, both encoding the full Petri net:

PetriTransitionCircuit proves a move is legal. It’s the general circuit described above — works for any transition in the net. Used for every move.

PetriWinCircuit proves a player has won. The win condition is already in the Petri net topology (win-detection transitions from Chapter 6 fire when three cells align). The circuit checks that the win_x or win_o place has a token:

func (c *PetriWinCircuit) Define(api frontend.API) error {
    root := petriMimcHash(api, c.Marking[:])
    api.AssertIsEqual(root, c.StateRoot)

    winTokens := frontend.Variable(0)
    for p := 0; p < NumPlaces; p++ {
        isWinnerPlace := api.IsZero(api.Sub(c.Winner, p))
        winTokens = api.Add(winTokens, api.Mul(c.Marking[p], isWinnerPlace))
    }
    api.ToBinary(api.Sub(winTokens, 1), 8) // >= 1 token
    return nil
}

Because the game logic lives in the net topology, the circuit doesn’t know anything about tic-tac-toe. Change the topology constants and you get ZK proofs for a different game.

Constraint Counts

For the 33-place, 35-transition tic-tac-toe net:

Component           Constraints   Purpose
-----------------   -----------   ---------------------------
MiMC hash (pre)     ~300          State root verification
MiMC hash (post)    ~300          State root verification
Transition delta    ~2,300        Topology multiplexing
Marking assertion   ~33           post = pre + delta
Enabledness         ~264          Bit decomposition per place
Total               ~3,200        Full transition proof

Proof generation takes ~100ms on commodity hardware. Verification takes ~2ms. The circuit compiles once; proofs are generated per-move.

The Game Protocol

A complete ZK tic-tac-toe game works as:

Initial: state_hash = hash(empty_board)

Round n:
  1. Current player fires transition locally (computes new marking)
  2. Computes new_state_hash = MiMC(new_marking)
  3. Generates Groth16 proof: (old_hash, new_hash, transition_id, proof)
  4. Opponent verifies proof (~2ms)
  5. If valid: state_hash = new_hash, continue
  6. If invalid: cheating detected, game forfeited

The opponent knows which transition fired (the move is public) but doesn’t see the full board state. Win detection is a separate proof demonstrating that a win place has a token.

Beyond Games: Token Transfers

The same circuit structure handles blockchain token transfers. An ERC-20 transfer is a Petri net transition:

  • balances[from] is an input place (tokens consumed)
  • balances[to] is an output place (tokens produced)
  • The guard balances[from] >= amount is the enabledness check
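Stripped of the cryptography, the transfer transition is just a guarded marking update. A plain sketch of the semantics the circuit enforces (the function and names are illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

// Transfer fires the transfer transition: consume tokens from the
// sender's balance place, produce them in the receiver's. The guard
// balances[from] >= amount is the enabledness check — if it fails,
// the transition does not fire and no state changes.
func Transfer(balances map[string]uint64, from, to string, amount uint64) error {
	if balances[from] < amount {
		return errors.New("transition not enabled: insufficient balance")
	}
	balances[from] -= amount // input place loses tokens
	balances[to] += amount   // output place gains tokens
	return nil
}

func main() {
	balances := map[string]uint64{"alice": 100, "bob": 25}
	if err := Transfer(balances, "alice", "bob", 40); err == nil {
		fmt.Println(balances["alice"], balances["bob"]) // 60 65
	}
	// A disabled transition is rejected before any state changes.
	fmt.Println(Transfer(balances, "bob", "alice", 1000))
}
```

Total tokens are conserved across every successful firing — the same conservation law the ZK circuit proves without revealing the balances.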

The arcnet project extends this with Merkle trees for account balances, enabling L1-to-L2 bridge verification:

func (c *TransferCircuit) Define(api frontend.API) error {
    // Guard: balance >= amount
    diff := api.Sub(c.BalanceFrom, c.Amount)
    api.ToBinary(diff, 64)

    // Verify Merkle inclusion
    leaf := mimcHash(api, c.From, c.BalanceFrom)
    current := leaf
    for i := 0; i < 20; i++ {
        left := api.Select(c.PathIndices[i], c.PathElements[i], current)
        right := api.Select(c.PathIndices[i], current, c.PathElements[i])
        current = mimcHash(api, left, right)
    }
    api.AssertIsEqual(current, c.PreStateRoot)
    return nil
}

The 20-level Merkle proof means the full state doesn’t need to live on-chain. Only state roots are published. The bridge contract verifies the proof and updates the root.

Application       Places                 Transition     Enabledness
---------------   --------------------   ------------   ------------------------
Tic-Tac-Toe       33 board+game states   Player move    Cell empty, correct turn
ERC-20 Transfer   Account balances       Transfer       Sufficient balance
Workflow          Process stages         State change   Required approvals
Voting            Ballot states          Cast vote      Eligible, hasn't voted

Same circuit structure. Different topology.

What ZK Doesn’t Hide

A common misconception is that ZK hides everything. It doesn't: the transition ID is not private. The verifier sees which transition fired. What's hidden is the state — the full marking of every place.

In tic-tac-toe, the opponent knows a move was made (say, X plays center) but doesn’t see the accumulated pattern of tokens across all 33 places. In a token transfer, the network knows a transfer happened but doesn’t see individual balances.

For applications where even the transition must be hidden, an additional layer of indirection is needed — proving that some valid transition fired without revealing which one. This is possible but increases circuit complexity significantly.

The Proof Pipeline

A complete proof cycle for any Petri net application:

  1. Client fires transition locally (computes new marking)
  2. Client computes MiMC state roots for pre and post markings
  3. Prover receives witness (pre/post markings + transition ID) and generates Groth16 proof
  4. Proof returned as ~128 bytes with public inputs
  5. Anyone can verify the proof in constant time (~2ms)
  6. On-chain verification possible via auto-generated Solidity verifier contract

The circuit doesn’t encode tic-tac-toe, or token transfers, or workflow rules. It encodes Petri net semantics: hash the marking, compute the delta from topology, assert the change, check enabledness. The application logic is in the net. The privacy is in the proof.

One circuit structure. Any Petri net. Provably correct, privately verified.

Topology-Driven Verification

Learning objective: Generate ZK proof circuits automatically from Petri net topology, and verify execution on-chain.

Chapter 12 built ZK circuits by hand — defining the PetriTransitionCircuit step by step, wiring up MiMC hashes and delta computations manually. That was instructive, but nobody should write ZK circuits by hand for every new model. This chapter closes the loop: given a Petri net JSON model, the petrigen compiler produces working gnark circuits, witness generators, and Solidity verifiers automatically. The topology is the circuit. Change the model, recompile, and you have a new proof system.

Along the way, we discover that graph connectivity determines more than circuit structure — it determines strategic value. Rate constants for ODE simulation can be derived directly from the bipartite graph, with no training data and no manual tuning. The classic tic-tac-toe heuristic (center > corner > edge) emerges from counting win-line connections. This is graph theory applied to Petri nets, and it feeds directly into the ZK pipeline.

The Compilation Pipeline

The full path from model to verifiable computation:

┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   JSON Model    │────▶│    petrigen     │────▶│  gnark Circuit  │
│  (Petri net)    │     │   (generator)   │     │   (Go code)     │
└─────────────────┘     └─────────────────┘     └─────────────────┘
                                                        │
                                                        ▼
                                               ┌─────────────────┐
                                               │ Groth16/PLONK   │
                                               │    Prover       │
                                               └─────────────────┘
                                                        │
                                                        ▼
                                               ┌─────────────────┐
                                               │ Solidity        │
                                               │ Verifier        │
                                               └─────────────────┘

The petrigen package in go-pflow reads a Petri net model and generates four files:

File                     Purpose
----------------------   ---------------------------------------------------------------
petri_state.go           Place/transition constants, topology arrays, marking operations
petri_circuits.go        PetriTransitionCircuit and PetriReadCircuit definitions
petri_game.go            Game state tracking and witness generation
petri_circuits_test.go   Compilation and proof verification tests

No manual circuit writing. The topology encoding — which places connect to which transitions, with what weights — becomes compile-time constant arrays in the generated code. The selector trick from Chapter 12 (evaluating all transitions and using IsZero to select one) is wired up automatically.

A Complete Example: Order Workflow

Consider a simple order processing workflow:

[pending] --approve--> [approved] --ship--> [shipped]
     |
     +----cancel----> [cancelled]

Define it as JSON:

{
  "name": "order-workflow",
  "places": [
    {"id": "pending", "initial": 1},
    {"id": "approved"},
    {"id": "shipped"},
    {"id": "cancelled"}
  ],
  "transitions": [
    {"id": "approve"},
    {"id": "ship"},
    {"id": "cancel"}
  ],
  "arcs": [
    {"from": "pending", "to": "approve"},
    {"from": "approve", "to": "approved"},
    {"from": "approved", "to": "ship"},
    {"from": "ship", "to": "shipped"},
    {"from": "pending", "to": "cancel"},
    {"from": "cancel", "to": "cancelled"}
  ]
}

Generate the circuits:

gen, _ := petrigen.New(petrigen.Options{
    PackageName:  "main",
    OutputDir:    ".",
    IncludeTests: true,
})

files, _ := gen.Generate(&model)
for _, f := range files {
    fmt.Println("Generated:", f.Name)
}

Output:

Generated: petri_state.go
Generated: petri_circuits.go
Generated: petri_game.go
Generated: petri_circuits_test.go

The generated petri_state.go encodes the topology as constants:

const NumPlaces = 4
const NumTransitions = 3

const (
    Pending   = 0
    Approved  = 1
    Shipped   = 2
    Cancelled = 3
)

const (
    Approve = 0
    Ship    = 1
    Cancel  = 2
)

var Topology = [NumTransitions]ArcDef{
    Approve: {Inputs: []int{0}, Outputs: []int{1}},  // pending → approved
    Ship:    {Inputs: []int{1}, Outputs: []int{2}},   // approved → shipped
    Cancel:  {Inputs: []int{0}, Outputs: []int{3}},   // pending → cancelled
}

This is the same pattern from Chapter 12, but generated automatically. The circuit’s Define() method iterates over Topology, computing deltas and enforcing the firing rule. The developer never touches circuit code.

Proving a Transition

With the generated code, proving a valid state transition takes five steps:

// 1. Compile the circuit
ccs, _ := frontend.Compile(ecc.BN254.ScalarField(),
    r1cs.NewBuilder, &PetriTransitionCircuit{})
fmt.Printf("Circuit has %d constraints\n", ccs.GetNbConstraints())

// 2. Trusted setup (one-time per circuit)
pk, vk, _ := groth16.Setup(ccs)

// 3. Fire a transition locally
game := NewPetriGame()
witness, _ := game.FireTransition(Approve)

// 4. Generate proof
assignment := witness.ToPetriTransitionAssignment()
w, _ := frontend.NewWitness(assignment, ecc.BN254.ScalarField())
proof, _ := groth16.Prove(ccs, pk, w)

// 5. Verify
publicWitness, _ := w.Public()
err := groth16.Verify(proof, vk, publicWitness)
// err == nil: proof valid

The verifier sees three public inputs — PreStateRoot, PostStateRoot, and Transition — and confirms the state changed correctly. The actual marking (which places have tokens) stays private.

For the order workflow, this circuit has roughly 1,800 R1CS constraints. Proof generation takes ~100ms. Verification takes ~2ms. The proof is 192 bytes.

Invalid Transitions Are Unprovable

What happens if someone tries to ship an order that hasn’t been approved?

game := NewPetriGame()  // Initial: pending=1
_, err := game.FireTransition(Ship)
// err: "transition ship is not enabled"

The FireTransition function checks enabledness before generating a witness. But even if someone bypasses this check and constructs a malicious witness, the circuit’s bit decomposition will fail — PreMarking[Approved] is 0, and 0 - 1 wraps to a huge field element that can’t be decomposed into 8 bits. The proof literally cannot be generated. Soundness comes from the circuit, not from the application code.

From Graph Connectivity to Rate Constants

Chapter 3 introduced mass-action kinetics with hand-tuned rate constants. Chapter 6 used them for tic-tac-toe heatmaps. But where do rate constants come from?

For chemical reactions, rates encode physical properties — activation energies, temperature dependence, catalytic effects. You measure them in a lab. For games and workflows, there’s no lab. The rate constants encode strategic value, and that value can be derived from the graph itself.

The Algorithm

The rate auto-derivation algorithm operates on the bipartite directed graph of the Petri net:

  1. Identify candidate transitions (the moves a player can make) and target transitions (the goals — win conditions, completion states)
  2. For each candidate, find its unique output places — places it produces that no other candidate produces
  3. Count how many target transitions take those unique places as inputs
  4. That count is the rate constant

Tic-Tac-Toe: Strategy from Topology

The TTT Petri net from Chapter 6 has 33 places and 35 transitions. Each play transition (like x_play_11) produces a piece at a position (like x11). Each win transition (like x_win_diag) requires three specific pieces as inputs.

Apply the algorithm to the center position:

x_play_11 outputs → {x11, o_turn, move_tokens}

Unique outputs (only x_play_11 produces x11):
  x11

Win transitions with x11 as input:
  x_win_row1  (middle row)    ✓
  x_win_col1  (center column) ✓
  x_win_diag  (main diagonal) ✓
  x_win_anti  (anti-diagonal) ✓

Rate for x_play_11 = 4  (center: 4 win lines)

Compare with a corner:

x_play_00 outputs → {x00, o_turn, move_tokens}

Unique outputs: x00

Win transitions with x00 as input:
  x_win_row0  ✓
  x_win_col0  ✓
  x_win_diag  ✓

Rate for x_play_00 = 3  (corner: 3 win lines)

And an edge:

x_play_01 outputs → {x01, o_turn, move_tokens}

Unique outputs: x01

Win transitions with x01 as input:
  x_win_row0  ✓
  x_win_col1  ✓

Rate for x_play_01 = 2  (edge: 2 win lines)

Center > corner > edge. The classic tic-tac-toe strategy emerges from counting graph connections. No game theory, no training data, no heuristics — just topology.

Filtering Shared Places

The algorithm has one non-obvious step: filtering shared output places. Without it, the results are useless.

Every x_play_* transition produces three output places: the piece (e.g., x00), the opponent’s turn token (o_turn), and a move counter (move_tokens). The piece place is unique to that candidate — only x_play_00 produces x00. But o_turn and move_tokens are produced by all 9 x-play transitions identically.

The problem: o_turn is an input to every x_win_* transition (win detection happens on the opponent’s turn). If you include o_turn in the connectivity count, every candidate connects to all 8 win targets through it, giving every position rate=8. The heatmap collapses to a flat field — center, corner, and edge become indistinguishable.

The fix: exclude any output place produced by more than one candidate. A shared place carries no discriminative signal — it’s the DC component that shifts every candidate equally. Only places unique to a single candidate can distinguish one candidate from another. This is analogous to mean-centering features before computing distances: the shared component must be subtracted before differences become meaningful.
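The derivation and the shared-place filter together fit in a short sketch. The arc lists below cover only the three positions discussed, with names following the chapter's conventions:

```go
package main

import "fmt"

// Candidate transitions and the places they output.
var candidateOutputs = map[string][]string{
	"x_play_11": {"x11", "o_turn", "move_tokens"},
	"x_play_00": {"x00", "o_turn", "move_tokens"},
	"x_play_01": {"x01", "o_turn", "move_tokens"},
}

// Target (win) transitions and the places they take as input.
var targetInputs = map[string][]string{
	"x_win_row0": {"x00", "x01", "x02", "o_turn"},
	"x_win_row1": {"x10", "x11", "x12", "o_turn"},
	"x_win_col0": {"x00", "x10", "x20", "o_turn"},
	"x_win_col1": {"x01", "x11", "x21", "o_turn"},
	"x_win_diag": {"x00", "x11", "x22", "o_turn"},
	"x_win_anti": {"x02", "x11", "x20", "o_turn"},
}

// DeriveRates counts, for each candidate, how many targets consume
// places that only that candidate produces.
func DeriveRates() map[string]int {
	// Count how many candidates produce each place.
	producers := map[string]int{}
	for _, outs := range candidateOutputs {
		for _, p := range outs {
			producers[p]++
		}
	}
	rates := map[string]int{}
	for cand, outs := range candidateOutputs {
		// Keep only places unique to this candidate.
		// This filters shared places like o_turn and move_tokens.
		unique := map[string]bool{}
		for _, p := range outs {
			if producers[p] == 1 {
				unique[p] = true
			}
		}
		count := 0
		for _, ins := range targetInputs {
			for _, p := range ins {
				if unique[p] {
					count++
					break
				}
			}
		}
		rates[cand] = count
	}
	return rates
}

func main() {
	// Expected: x_play_11 (center) = 4, x_play_00 (corner) = 3,
	// x_play_01 (edge) = 2.
	fmt.Println(DeriveRates())
}
```

Removing the `producers[p] == 1` filter makes every candidate reach all six targets through o_turn, collapsing the rates exactly as described above.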

It’s Graph Theory, Not Petri Net Theory

The rate auto-derivation is pure graph theory. It operates on a bipartite directed graph — nodes are candidates and targets, edges pass through output places — and computes degree centrality of candidates with respect to targets through unique edges. Nothing about the algorithm requires Petri net firing semantics, token counts, or conservation laws.

The algorithm is structurally similar to a single message-passing step in a graph neural network:

  • GNN: node_embedding = aggregate(neighbor_features)
  • Petri net: rate[candidate] = count(reachable_targets through unique_outputs)

The difference: GNNs learn the aggregation function from data. The Petri net version uses a fixed, interpretable aggregation (count of target connections). No training needed.

What the Petri net formalism adds comes in layers above the rate derivation:

  1. Firing semantics. Transitions consume and produce tokens atomically. This gives you state machines that pure graph connectivity cannot express.
  2. Mass-action convention. The rate formula couples topology-derived weights to the current state.
  3. Conservation laws. P-invariants constrain the state space with guarantees that graph centrality alone cannot provide.
  4. ZK circuit structure. The stoichiometry matrix defines the gnark circuit constraints directly.

The rate derivation discovers how much each candidate matters. The Petri net machinery determines what happens when you act on that knowledge.

Poker Hand Ranking: Integer Reduction

The tic-tac-toe example derives strategic value from outgoing connectivity — more connections to win transitions means higher value. But the same mechanism works in reverse. In poker, we want to rank hand categories by rarity. Rare hands should have high value; common hands, low.

The insight: encode combinatorial frequency as structural outflow. Build an analysis net where each hand category gets:

  • A source place src_H (1 token) — constant inflow via catalytic arc
  • A value place val_H (0 tokens) — accumulation target
  • A play transition play_H — moves tokens from source to value, returning the source token
  • N drain transitions — consume tokens from val_H at a rate proportional to hand frequency

The drain counts are log-scaled from actual 5-card combination counts:

Hand              5-Card Combos   Drains
Straight Flush               40        1
Four of a Kind              624        2
Full House                3,744        4
Flush                     5,108        5
Straight                 10,200        8
Three of a Kind          54,912       12
Two Pair                123,552       16
One Pair              1,098,240       24
High Card             1,302,540       32

The full net has 18 places and 113 transitions (9 play + 104 drain).

At equilibrium under mass-action kinetics, inflow equals outflow for each value place:

k · src_H = N_H · k · val_H

Since src_H is catalytic (always 1) and rates are uniform, the equilibrium value is:

val_H = 1 / N_H

Rare hands have fewer drains, accumulate more tokens, and produce higher equilibrium values. Common hands drain fast, accumulate little, and score low. Running the ODE with uniform rates to equilibrium and normalizing so the 32-drain high card scores 1.0 yields:

Straight Flush:  32.0  (1 drain)
Four of a Kind:  16.0  (2 drains)
Full House:       8.0  (4 drains)
Flush:            6.4  (5 drains)
Straight:         4.0  (8 drains)
Three of a Kind:  2.7  (12 drains)
Two Pair:         2.0  (16 drains)
One Pair:         1.3  (24 drains)
High Card:        1.0  (32 drains)

The entire poker hand hierarchy emerges from topology. No poker knowledge is injected beyond the frequency encoding in drain counts.
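The equilibrium is easy to check numerically. With uniform rate k and a catalytic source fixed at 1, each value place obeys dval/dt = k − N·k·val, which settles at 1/N; normalizing against the 32-drain high card recovers the integers above. A few lines of explicit Euler integration confirm this:

```go
package main

import "fmt"

// Drains per hand category, from the table above.
var drains = map[string]float64{
	"straight_flush": 1, "four_kind": 2, "full_house": 4,
	"flush": 5, "straight": 8, "three_kind": 12,
	"two_pair": 16, "one_pair": 24, "high_card": 32,
}

// Equilibrium integrates dval/dt = k*src - n*k*val with src fixed
// at 1 (catalytic) and returns the steady-state value for n drains.
func Equilibrium(n float64) float64 {
	const k, dt = 1.0, 0.001
	val := 0.0
	for i := 0; i < 200000; i++ { // integrate to t = 200
		val += dt * (k - n*k*val)
	}
	return val
}

func main() {
	base := Equilibrium(drains["high_card"]) // normalize worst hand to 1.0
	for hand, n := range drains {
		fmt.Printf("%-15s %.1f\n", hand, Equilibrium(n)/base)
	}
}
```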

TTT vs. Poker: Same Mechanism, Inverted Reading

In tic-tac-toe, more connections to win transitions means higher strategic value. The raw ODE concentration increases with connectivity — center accumulates tokens fastest because it connects to the most win lines.

In poker, fewer drain connections means higher hand value. The raw ODE concentration increases with fewer drains — straight flush accumulates tokens fastest because it has the fewest sinks.

Both cases recover integers (or integer-like ratios) from topology via ODE equilibrium. The structural principle is identical: differential outflow creates differential accumulation, and the equilibrium concentrations encode the ranking.

This also addresses the boundary question from Chapter 15. The exponential weights approach bolted scoring onto the net as bookkeeping — 52 extra transitions producing a number that nothing in the net reads. Integer reduction derives the scoring from the net. The drain transitions aren’t bookkeeping; they’re the mechanism that produces the ranking. The topology speaks.

The next chapter completes the pipeline: compiling guards and state invariants into ZK circuits, committing state with Merkle trees, and verifying proofs on-chain via smart contracts.

On-Chain ZK Verification

Learning objective: Compile guards and state invariants into ZK circuits, commit state with Merkle trees, and verify Petri net execution on-chain.

Chapter 13 showed how petrigen compiles Petri net topology into gnark circuits and how graph connectivity determines strategic value. This chapter completes the pipeline: guards become arithmetic constraints, state is committed via Merkle trees, invariants are enforced cryptographically, and proofs are verified on-chain by smart contracts.

Guard Compilation

Chapter 4 introduced guards — boolean expressions that add conditions beyond simple token availability. A guard like balances[from] >= amount on a transfer transition means: even if the input places have tokens, the transition only fires when the guard is satisfied.

In a ZK circuit, guards become arithmetic constraints. The GuardCompiler in go-pflow transforms guard expressions into the constraint IR that the gnark code generator consumes.

Boolean Logic as Field Arithmetic

ZK circuits operate over a finite field. There are no if statements, no booleans, no branches. Everything is multiplication and addition over integers mod a prime. The compiler transforms each logical operation:

AND: A && B becomes A * B. Both must be 1 for the product to be 1.

OR: A || B becomes A + B - A * B. At least one must be 1.

NOT: !A becomes 1 - A. The operand must be boolean (0 or 1).
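These identities can be checked exhaustively over {0, 1}. In this quick sketch plain integers stand in for field elements, which is safe here because no intermediate value exceeds 1:

```go
package main

import "fmt"

// Arithmetic encodings of boolean operators over {0, 1}.
func and(a, b int) int { return a * b }
func or(a, b int) int  { return a + b - a*b }
func not(a int) int    { return 1 - a }

func main() {
	for _, a := range []int{0, 1} {
		for _, b := range []int{0, 1} {
			fmt.Printf("a=%d b=%d  AND=%d OR=%d NOT a=%d\n",
				a, b, and(a, b), or(a, b), not(a))
		}
	}
}
```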

Comparisons as Range Checks

The more subtle transformations handle comparisons:

Greater-or-equal: left >= right becomes:

  1. Compute diff = left - right
  2. Assert diff is non-negative via bit decomposition

Non-negativity in a finite field is tricky — every field element is technically positive. The trick from Chapter 12 applies: decompose diff into bits. If the original value was negative, it wraps to a huge field element that won’t fit in the expected bit width, and the decomposition fails.

Strict greater-than: left > right reduces to left - right - 1 >= 0.

Equality: left == right is a direct AssertIsEqual constraint.

Inequality: left != right requires proving the inverse exists. The compiler introduces a witness inv and constrains (left - right) * inv = 1. This is satisfiable only when left - right is non-zero — zero has no multiplicative inverse in a field.

State Access and Merkle Proofs

Guards often reference state: balances[alice] >= amount. The compileIndexExpr function handles map lookups by registering a state read. Each state read becomes a Merkle proof obligation — the prover must demonstrate that the claimed value actually exists in the committed state tree.

For nested maps like allowances[owner][spender], the compiler generates composite key hashing: key = Poseidon(owner, spender). The resulting key looks up a single leaf in the Merkle tree.

func (c *GuardCompiler) compileIndexExpr(idx *guard.IndexExpr) *Expr {
    // Check for nested index (e.g., allowances[owner][spender])
    if innerIdx, ok := idx.Object.(*guard.IndexExpr); ok {
        return c.compileNestedIndex(innerIdx, idx.Index)
    }

    // Simple index: place[key]
    placeIdent, _ := idx.Object.(*guard.Identifier)
    keyIdent, _ := idx.Index.(*guard.Identifier)

    // Register state read — triggers Merkle proof generation
    witness := c.witnesses.AddStateRead(placeIdent.Name, []string{keyIdent.Name})
    return VarExpr(witness.Name)
}

The guard compiler doesn’t verify Merkle proofs itself. It registers what state needs to be proven, and the pipeline’s Merkle proof compiler generates the verification constraints separately.

Merkle State Commitment

Chapter 12 committed state with a flat MiMC hash: feed all place values into a single hash and publish the result. This works for small nets — the tic-tac-toe model has 33 places, and hashing 33 values costs ~300 constraints.

But real applications have thousands or millions of state entries. An ERC-20 token has one balance per address. A workflow system has one case state per process instance. Flat hashing doesn’t scale.

From Flat Hash to Merkle Tree

A Merkle tree organizes state as leaves and hashes pairs up to a single root:

            root
           /    \
        h01      h23
       /   \    /   \
     h0    h1  h2    h3
     |     |   |     |
   leaf0 leaf1 leaf2 leaf3

To prove a single value, the prover provides the leaf and the sibling hashes along the path to the root — log2(n) hashes instead of n. A depth-20 tree supports 2^20 (~1 million) leaves with only 20 hash operations per proof.

Poseidon vs. MiMC

Chapter 12 used MiMC for state commitment — ~300 constraints per hash. The Merkle tree compiler uses Poseidon, another ZK-friendly hash function, at ~182 constraints per hash. Over 20 levels of tree traversal, this saves roughly 2,400 constraints per Merkle proof.

Verification in the Circuit

The Merkle proof verification algorithm in the circuit:

  1. Compute the leaf hash: leaf = Poseidon(key, value)
  2. Walk up the tree: at each level, select left/right based on the path index, then hash the pair
  3. Assert the computed root equals the committed state root

The path selection uses the same no-branching pattern as transition selection — arithmetic multiplexing:

left  = pathIndex * sibling + (1 - pathIndex) * current
right = pathIndex * current + (1 - pathIndex) * sibling
hash  = Poseidon(left, right)

Each Merkle proof adds approximately 7,280 constraints (20 levels, 2 operations per level, 182 constraints per Poseidon hash). A transition with two state accesses (sender balance, receiver balance) adds ~14,560 constraints for Merkle proofs alone. This is the cost of scalable state — more constraints per proof, but the state tree can hold millions of entries without growing the circuit further.
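Outside the circuit, the same verification walk can be sketched in plain Go. SHA-256 stands in for Poseidon (which is not in the standard library), and the branch on the path bit plays the role of the arithmetic mux used in the circuit:

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

func hashPair(left, right []byte) []byte {
	h := sha256.New()
	h.Write(left)
	h.Write(right)
	return h.Sum(nil)
}

// VerifyMerkle recomputes the root from a leaf and its sibling path.
// pathBits[i] == 1 means the current node is the right child at level i.
func VerifyMerkle(leaf []byte, siblings [][]byte, pathBits []int, root []byte) bool {
	current := leaf
	for i, sib := range siblings {
		if pathBits[i] == 1 {
			current = hashPair(sib, current)
		} else {
			current = hashPair(current, sib)
		}
	}
	return bytes.Equal(current, root)
}

func main() {
	// Build a depth-2 tree over four leaves, then prove leaf 2.
	leaves := make([][]byte, 4)
	for i := range leaves {
		s := sha256.Sum256([]byte{byte(i)})
		leaves[i] = s[:]
	}
	h01 := hashPair(leaves[0], leaves[1])
	h23 := hashPair(leaves[2], leaves[3])
	root := hashPair(h01, h23)

	// leaf2 is the left child of h23; h23 is the right child of root.
	ok := VerifyMerkle(leaves[2], [][]byte{leaves[3], h01}, []int{0, 1}, root)
	fmt.Println("proof valid:", ok)
}
```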

Nested Maps

For data structures like allowances[owner][spender], the compiler hashes the key pair into a single composite key:

compositeKey = Poseidon(owner, spender)
leaf = Poseidon(compositeKey, value)

This flattens nested maps into a single Merkle tree. The prover includes the raw keys and the composite key hash in the witness, and the circuit verifies the key composition.

State Invariants in Circuits

Chapter 2 introduced P-invariants — weighted sums of places that remain constant across all firings. For the enzyme kinetics model, the total of substrate plus complex plus product is conserved: tokens are neither created nor destroyed, only rearranged.

P-invariants are powerful for offline analysis. But in a ZK context, they become something stronger: constraints compiled directly into the proof circuit. A violation isn’t just detectable — it’s unprovable. No valid proof can exist for a state transition that breaks an invariant.

Conservation Laws

The InvariantCompiler generates constraints for conservation laws. Consider an ERC-20 token with the invariant sum(balances) == totalSupply. In a ZK circuit, we can’t iterate over all balances — the circuit doesn’t know how many accounts exist. Instead, we verify the invariant differentially:

Untouched balances don’t change, so their deltas are zero. The circuit only needs to verify that the changes to touched balances sum to the change in total supply:

func (c *InvariantCompiler) CompileConservation(
    sumPlace string,
    totalPlace string,
    transitions []StateTransition,
) []*Constraint {
    // Sum deltas for the summed place (e.g., balances) and capture
    // the delta of the total place (e.g., totalSupply)
    var sumDeltaExpr, totalDeltaExpr *Expr
    for _, t := range transitions {
        delta := SubExpr(VarExpr(t.PostVar), VarExpr(t.PreVar))
        switch t.Place {
        case sumPlace:
            if sumDeltaExpr == nil {
                sumDeltaExpr = delta
            } else {
                sumDeltaExpr = AddExpr(sumDeltaExpr, delta)
            }
        case totalPlace:
            totalDeltaExpr = delta
        }
    }

    // Conservation: delta(total) == sum(deltas)
    return []*Constraint{{
        Type:  Equal,
        Left:  totalDeltaExpr,
        Right: sumDeltaExpr,
        Tag:   "conservation: delta(totalSupply) == sum(delta(balances))",
    }}
}

This is the state equation from Chapter 2 (M′ = M₀ + C·σ) enforced cryptographically. The incidence matrix determines the deltas; the invariant compiler verifies they satisfy the conservation law.
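The differential check itself is simple enough to demonstrate outside the circuit. In this sketch a transfer touches two balances; their deltas must sum to the change in total supply (zero for a transfer):

```go
package main

import "fmt"

// StateTransition records one touched entry's before/after values.
type StateTransition struct {
	Place     string
	Pre, Post int64
}

// ConservationHolds checks delta(total) == sum of deltas over sumPlace
// entries — the differential form of sum(balances) == totalSupply.
func ConservationHolds(sumPlace string, totalDelta int64, ts []StateTransition) bool {
	var sumDelta int64
	for _, t := range ts {
		if t.Place == sumPlace {
			sumDelta += t.Post - t.Pre
		}
	}
	return sumDelta == totalDelta
}

func main() {
	// A transfer of 10 from alice to bob: total supply unchanged.
	transfer := []StateTransition{
		{Place: "balances", Pre: 100, Post: 90}, // alice
		{Place: "balances", Pre: 50, Post: 60},  // bob
	}
	fmt.Println(ConservationHolds("balances", 0, transfer)) // true

	// A buggy transfer that creates 5 tokens out of thin air.
	buggy := []StateTransition{
		{Place: "balances", Pre: 100, Post: 90},
		{Place: "balances", Pre: 50, Post: 65},
	}
	fmt.Println(ConservationHolds("balances", 0, buggy)) // false
}
```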

Non-Negativity

Token counts can’t go negative. The CompileNonNegative function adds range checks on every touched place’s post-transition value:

func (c *InvariantCompiler) CompileNonNegative(
    place string,
    transitions []StateTransition,
) []*Constraint {
    var constraints []*Constraint
    for _, t := range transitions {
        if t.Place == place {
            constraints = append(constraints,
                RangeConstraint(VarExpr(t.PostVar),
                    fmt.Sprintf("%s >= 0", place)))
        }
    }
    return constraints
}

In the circuit, this becomes a bit decomposition — the same trick Chapter 12 used for enabledness checks. If a balance would go negative, the field element wraps to a huge value that fails the range check.

Boundedness

Some places have capacity limits. A voting system might limit each voter to one ballot. The CompileBounded function constrains post-values to not exceed a maximum:

// max - post >= 0
diff := SubExpr(ConstExpr(maxValue), VarExpr(t.PostVar))
constraints = append(constraints,
    EqualConstraint(VarExpr(diffWitness), diff, "bounded"),
    RangeConstraint(VarExpr(diffWitness), "bounded check"),
)

From Schema to Constraints

The invariant compiler reads constraints directly from the model schema:

sum(balances) == totalSupply    → Conservation constraint
balances >= 0                   → Non-negative constraint
votes <= maxVotes               → Bounded constraint

Each constraint adds a small number of R1CS constraints to the circuit. The cost is minimal — typically 1-3 constraints per invariant — but the guarantee is absolute. A prover who violates any invariant simply cannot generate a valid proof.

On-Chain Verification

Generating proofs locally is useful for peer-to-peer verification — one player proves to another that a move is valid. But for applications with shared state — token transfers, governance, auctions — the proof needs to be verified where the state lives: on a blockchain.

State Root Chaining

The ZkOde smart contract maintains a chain of state roots. Each proof’s post-state root must equal the next proof’s pre-state root, creating an unbroken chain from genesis to the current state:

Genesis: stateRoot₀ = MiMC(initial_marking)

Step 1: proof₁ proves stateRoot₀ → stateRoot₁
Step 2: proof₂ proves stateRoot₁ → stateRoot₂
  ...
Step n: proofₙ proves stateRootₙ₋₁ → stateRootₙ

The contract stores the current state root and rejects any proof whose pre-state root doesn’t match:

if (preRoot != currentStateRoot) {
    revert InvalidStateChain(currentStateRoot, preRoot);
}

This ensures no steps are skipped, replayed, or reordered. The on-chain state root is the single source of truth.
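The chaining rule itself fits in a few lines. This Go sketch mirrors the Solidity check, with proof verification elided so only the bookkeeping is shown:

```go
package main

import (
	"errors"
	"fmt"
)

// RootChain mimics the contract's state-chain bookkeeping.
type RootChain struct {
	Current uint64 // stand-in for a 254-bit state root
	Steps   int
}

// Submit accepts a step only if its pre-root matches the chain head.
func (c *RootChain) Submit(preRoot, postRoot uint64) error {
	if preRoot != c.Current {
		return errors.New("invalid state chain: pre-root mismatch")
	}
	c.Current = postRoot
	c.Steps++
	return nil
}

func main() {
	chain := &RootChain{Current: 100}   // genesis root
	fmt.Println(chain.Submit(100, 200)) // accepted: advances to 200
	fmt.Println(chain.Submit(100, 300)) // rejected: replays a stale pre-root
	fmt.Println(chain.Current, chain.Steps)
}
```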

The ZkOde Contract

The core contract is straightforward:

contract ZkOde {
    IVerifier public verifier;
    uint256 public currentStateRoot;
    uint256 public stepCount;
    uint256 public immutable numTransitions;
    bool public enforceOptimal;

    struct Step {
        uint256 preRoot;
        uint256 postRoot;
        uint256 nextRoot;
        uint256 stepSize;
        uint256 chosenTransition;
        uint256 timestamp;
    }

    mapping(uint256 => Step) public steps;
}

The verifier is a gnark-generated Groth16 verifier contract — exported directly from the proving system. The submitStep function accepts a proof, verifies it, and advances the state root:

function submitStep(
    uint256[8] calldata proof,
    uint256[] calldata publicInputs,
    uint256 chosenTransition,
    uint256 nextStateRoot
) external onlyProver {
    // Validate state chain continuity
    require(publicInputs[0] == currentStateRoot);

    // Verify Groth16 proof
    uint256[2] memory a = [proof[0], proof[1]];
    uint256[2][2] memory b = [[proof[2], proof[3]],
                               [proof[4], proof[5]]];
    uint256[2] memory c = [proof[6], proof[7]];
    require(verifier.verifyProof(a, b, c, publicInputs));

    // Record step and advance state
    currentStateRoot = nextStateRoot;
    stepCount++;
}

Optimal Play Enforcement

For game applications, the contract can enforce that the chosen transition has the highest rate among all available transitions. The public inputs include rate values for every transition (derived from the ODE integration), and the contract checks that no other transition has a higher rate:

if (enforceOptimal) {
    uint256 chosenRate = publicInputs[3 + chosenTransition];
    for (uint256 t = 0; t < numTransitions; t++) {
        uint256 rate = publicInputs[3 + t];
        if (rate > chosenRate) {
            revert NotOptimalPlay(
                chosenTransition, chosenRate, t, rate);
        }
    }
}

This is a remarkable property: the contract doesn’t know the game rules, doesn’t know what tic-tac-toe is, doesn’t store the board state. It only knows the topology-derived rates. And yet it enforces optimal play — because the rates encode strategic value, and strategic value comes from the topology, and the topology is the game rules.
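Stripped of Solidity, the enforcement loop is an argmax check. A Go sketch of the same rule:

```go
package main

import "fmt"

// IsOptimal reports whether the chosen transition has the highest rate.
// Ties are allowed, matching the contract: it only rejects a move when
// some other transition has a strictly higher rate.
func IsOptimal(rates []uint64, chosen int) bool {
	for _, r := range rates {
		if r > rates[chosen] {
			return false
		}
	}
	return true
}

func main() {
	// Topology-derived first-move rates: center=4, corners=3, edges=2.
	rates := []uint64{3, 2, 3, 2, 4, 2, 3, 2, 3}
	fmt.Println(IsOptimal(rates, 4)) // center: optimal
	fmt.Println(IsOptimal(rates, 1)) // edge: rejected
}
```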

Batch Submission

For applications with many steps per session (a full game, a sequence of workflow transitions), submitting proofs one at a time wastes gas on per-transaction overhead. The submitBatchSteps function accepts arrays of proofs and processes them sequentially:

function submitBatchSteps(
    uint256[8][] calldata proofs,
    uint256[][] calldata publicInputsBatch,
    uint256[] calldata chosenTransitions,
    uint256[] calldata nextStateRoots
) external onlyProver {
    for (uint256 i = 0; i < proofs.length; i++) {
        _verifyAndRecord(
            proofs[i], publicInputsBatch[i],
            chosenTransitions[i], nextStateRoots[i]);
    }
}

Gas Costs

Operation              Gas Cost
Groth16 verification   ~200,000
State root storage      ~20,000
Total per transition   ~220,000

At current L2 gas prices, this is roughly $0.01–$2 per verified transition, depending on the chain. A full 9-move tic-tac-toe game costs under $1 on Base.

Deployed Contracts

The system is deployed on Base Sepolia (testnet) with two configurations:

Deployment               Verifier    ZkOde       Transitions   Optimal
Cascade (3-place)        0xA675...   0x2084...   2             No
TTT Heatmap (33-place)   0x97a6...   0xF5d9...   9             Yes

The TTT deployment enforces optimal play — the contract rejects any move that doesn’t have the highest topology-derived rate. Combined with the ZK proof of correct state transition, this creates a fully trustless game: no server, no referee, no possibility of cheating.

ZK Hold’em: Topology-Derived Payouts

The poker hand analysis net from Chapter 13 produces hand strength values purely from topology. These values transfer directly to a game contract as payout multipliers.

The ZKHoldem contract uses the same Groth16 verifier pattern as ZkOde, but adds three poker-specific mechanisms:

Commit-reveal shuffle. Before the game starts, the house publishes Poseidon(seed) as a binding commitment. The deck is shuffled via Fisher-Yates with the seed as a deterministic PRNG. At showdown, the house reveals the seed. Anyone can verify Poseidon(seed) == commitment and derive the cards independently. If the house doesn’t reveal within a timeout window, the player claims the pot.
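The commit-reveal handshake is a standard construction. A sketch with SHA-256 standing in for Poseidon (the deterministic shuffle derived from the seed is omitted):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Commit binds the house to a seed without revealing it.
func Commit(seed []byte) [32]byte {
	return sha256.Sum256(seed)
}

// VerifyReveal lets anyone check a revealed seed against the commitment.
func VerifyReveal(seed []byte, commitment [32]byte) bool {
	return sha256.Sum256(seed) == commitment
}

func main() {
	seed := []byte("house-secret-seed")
	c := Commit(seed) // published before the game starts

	// At showdown the house reveals the seed; anyone can verify it
	// and rerun the deterministic shuffle from the seed.
	fmt.Println(VerifyReveal(seed, c))                   // true
	fmt.Println(VerifyReveal([]byte("swapped-deck"), c)) // false
}
```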

State root chaining per action. Each game action — deal, check, bet, call, fold — fires a transition on a game net (18 places, 16 transitions) and produces a Groth16 proof. The post-state Poseidon hash of one action becomes the pre-state hash of the next, forming an unbroken proof chain.

Topology-derived bonus payouts. The winner takes the pot plus a bonus proportional to the integer reduction value of their hand:

uint256[9] private HAND_STRENGTH = [
    uint256(1), 1, 2, 3, 4, 6, 8, 16, 32
];
// Index: 0=HC, 1=Pair, 2=2P, 3=3K, 4=Str, 5=Flush, 6=FH, 7=4K, 8=SF

bonus = HAND_STRENGTH[rank] * ante;

A straight flush win pays 32x the ante as bonus. A pair win pays 1x. These multipliers aren’t tuned by a game designer — they’re the equilibrium concentrations from the analysis net, normalized and rounded. The house edge is transparent because the payout structure is derived from the same topology that defines the game.

The house AI is deterministic: it evaluates its hand against the topology-derived strength values and applies a fixed strategy based on hand strength and pot odds. This determinism is enforceable on-chain — the contract can verify that the house played according to the strategy given its cards, just as the TTT contract enforces optimal play.

Combinatorial vs. Continuous

The compilation pipeline supports two fundamentally different modes, and choosing the right one matters for both the circuit design and the interpretation of results.

Combinatorial Mode

Games, workflows, governance, token standards.

The state space is finite and discrete. Tic-tac-toe has at most 5,478 reachable board states. A workflow has a bounded number of case states. An ERC-20 has integer balances.

In these systems, the topology and discrete scoring are sufficient. The ODE provides a continuous visualization of token flow — useful for intuition, beautiful as a heatmap — but when it comes time to decide, you discretize. The heatmap scoring that drives move selection in tic-tac-toe is discrete: count win lines, evaluate threats, pick the best move. There is no continuous quantity being modeled.

The “mass-action kinetics” framing is a useful metaphor that lets you apply ODE machinery, but the strategic information is graph-theoretic.

What the ZK circuit proves: the discrete state transition was valid — correct inputs consumed, correct outputs produced, guard conditions met. The ODE step is optional — it enriches the public output with heatmap scores but isn’t necessary for correctness.

Rate derivation: auto-derive from topology. Degree centrality captures the essential structure.

Continuous Mode

Chemical kinetics, population ecology, epidemiology, economic models.

The state genuinely evolves in continuous time. Concentrations rise and fall. Populations oscillate. Epidemics peak and decay. The trajectory carries information — you care about when concentrations peak, in what order, and how fast.

In these systems, the topology tells you what can happen, but the ODE tells you what happens first. A chemical cascade where reaction A peaks before reaction B produces a different product mix than one where B peaks first, even if the topology is identical.

What the ZK circuit proves: the ODE integration was computed correctly. The 7-stage Tsit5 step, the mass-action rates, the stoichiometry-weighted derivatives — all verified. This is where the ZK ODE machinery earns its keep.

Rate derivation: bring domain-specific rates. They encode physical properties (activation energies, birth/death rates, transmission coefficients) that topology cannot capture.

Fixed-Point Arithmetic for Continuous Mode

ZK circuits operate over finite fields — integers mod a prime. There are no floating-point numbers. The system uses fixed-point arithmetic with a scale factor of 10^18 over the BN254 scalar field:

fixed(x) = round(x × 10^18)

This gives 18 decimal digits of precision — more than enough for ODE integration — using only integer operations that ZK circuits handle natively.

The Pipeline Supports Both

                Combinatorial                  Continuous
Rates           Auto-derived from topology     Specified in model
ODE step        Optional (enriches output)     Essential (proves trajectory)
Scoring         Discrete win/block detection   Rate-weighted heatmap
Circuit focus   State transition validity      Integration correctness
Examples        TTT, Hold’em, workflows,       Cascade reactions,
                ERC-20                         SIR epidemics

The zkgen compiler selects the appropriate circuit template based on the model configuration.

Putting It Together

The compilation pipeline from this chapter and Chapter 13, combined with the circuit fundamentals from Chapter 12, creates a complete system for verifiable Petri net execution:

  1. Define the model in JSON — places, transitions, arcs, guards, invariants
  2. Generate circuits with petrigen — topology becomes constants, guards become constraints, invariants become range checks
  3. Derive rates from graph connectivity — or specify domain-specific rates
  4. Prove transitions locally — the prover generates a Groth16 proof that the state changed correctly
  5. Verify anywhere — peers verify in 2ms, smart contracts verify on-chain for ~220,000 gas

The model is the specification. The compiler generates the verifier. The proof is the attestation that the specification was followed.

The technique generalizes. Two games now demonstrate topology-derived strategic values:

Game          What Reduction Recovers                      Net Size
Tic-Tac-Toe   Position value: center=4, corner=3, edge=2   18p x 33t
Hold’em       Hand ranking: SF=32, 4K=16, …, HC=1          18p x 113t

In both cases, ODE simulation with uniform rates extracts integers (or integer-like ratios) that encode strategic value. No game-specific heuristics. No training data. Just topology. The same proof system — Groth16 over BN254, Poseidon state hashing, stoichiometry-based constraints — covers both. The circuit doesn’t know what game it’s proving. It only knows the topology.

One circuit structure. Any Petri net. Automatically generated, topology-driven, cryptographically verified.

Exponential Weights and Scoring Systems

Learning objective: Design scoring systems using power-of-2 encoding in net structure.

Chapters 5 through 10 modeled diverse systems with Petri nets. Chapter 11 extracted models from data. Chapter 12 added privacy through zero-knowledge proofs. This chapter examines a technique that sits at the boundary between what a Petri net should model internally and what should be delegated to external systems: encoding priority rankings as exponential arc weights.

The technique is sound — power-of-2 weights encode lexicographic order as a single integer. But applying it to poker hand scoring inside the Petri net revealed an important design principle: the boundary between the net and the world matters. This chapter tells the story of what worked, what didn’t, and what the failure teaches about modeling decisions.

Binary Dominance

The core idea: assign power-of-2 weights so that any single higher-priority item outweighs all lower-priority items combined. This is the same principle behind bitmasks, Unix file permissions (read=4, write=2, execute=1), and binary-encoded feature flags.

For poker card ranking, each rank gets a power of 2:

Rank   Rank Power
A      4096  (2^12)
K      2048  (2^11)
Q      1024  (2^10)
J       512  (2^9)
T       256  (2^8)
9       128  (2^7)
2         1  (2^0)

With suit tiebreakers (spade=3, heart=2, diamond=1, club=0), the formula is:

card_value = rank_power × 4 + suit_value

Every card maps to a unique integer: A-spade = 4096 × 4 + 3 = 16387, A-heart = 16386, down to 2-club = 1 × 4 + 0 = 4.
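The mapping can be sketched in a few lines of Go (illustrative names only — this is not the pflow API):

```go
package main

import "fmt"

// rankPower assigns each rank a power of 2, from 2 (2^0) up to A (2^12).
var rankPower = map[rune]int{
	'2': 1 << 0, '3': 1 << 1, '4': 1 << 2, '5': 1 << 3,
	'6': 1 << 4, '7': 1 << 5, '8': 1 << 6, '9': 1 << 7,
	'T': 1 << 8, 'J': 1 << 9, 'Q': 1 << 10, 'K': 1 << 11, 'A': 1 << 12,
}

// suitValue breaks ties: spade=3, heart=2, diamond=1, club=0.
var suitValue = map[rune]int{'s': 3, 'h': 2, 'd': 1, 'c': 0}

// cardValue maps a card to its unique integer: rank_power * 4 + suit.
func cardValue(rank, suit rune) int {
	return rankPower[rank]*4 + suitValue[suit]
}

func main() {
	fmt.Println(cardValue('A', 's')) // 16387
	fmt.Println(cardValue('A', 'h')) // 16386
	fmt.Println(cardValue('2', 'c')) // 4
}
```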

The key property: binary dominance. A single King (2^11 = 2048) outscores all cards Queen and below combined (2^10 + 2^9 + … + 2^0 = 2047). This means you can sum the weights for any set of cards and the result preserves lexicographic comparison. No sorting needed — just a number.

Why Linear Weights Fail

Naive linear weights (A=13, K=12, Q=11, …) break this property. A hand with {A, K, 5, 4, 3} scores 13 + 12 + 4 + 3 + 2 = 34. A hand with {A, Q, J, T, 9} scores 13 + 11 + 10 + 9 + 8 = 51. Linear scoring says the second hand wins. Poker says the King beats the Queen — the first hand wins.

Exponential weights fix this by construction. With power-of-2 encoding, the King’s contribution dominates everything below it, so the first hand’s score is necessarily higher than any hand where a Queen is the second-highest card.
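A minimal Go sketch makes the contrast concrete (hand-rolled weight tables, not part of any library):

```go
package main

import "fmt"

// Linear weights: A=13, K=12, …, 2=1 — breaks lexicographic order.
var linear = map[string]int{
	"2": 1, "3": 2, "4": 3, "5": 4, "6": 5, "7": 6, "8": 7,
	"9": 8, "T": 9, "J": 10, "Q": 11, "K": 12, "A": 13,
}

// Exponential weights: 2^0 … 2^12 — a higher rank dominates all below it.
var expo = map[string]int{
	"2": 1 << 0, "3": 1 << 1, "4": 1 << 2, "5": 1 << 3, "6": 1 << 4,
	"7": 1 << 5, "8": 1 << 6, "9": 1 << 7, "T": 1 << 8, "J": 1 << 9,
	"Q": 1 << 10, "K": 1 << 11, "A": 1 << 12,
}

// score sums the weights of the ranks in a hand.
func score(weights map[string]int, hand []string) int {
	total := 0
	for _, r := range hand {
		total += weights[r]
	}
	return total
}

func main() {
	kingHigh := []string{"A", "K", "5", "4", "3"}  // should win: K > Q
	queenHigh := []string{"A", "Q", "J", "T", "9"}

	fmt.Println(score(linear, kingHigh), score(linear, queenHigh)) // 34 51 — wrong order
	fmt.Println(score(expo, kingHigh), score(expo, queenHigh))     // 6158 6016 — correct
}
```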

The Score Is the State

There’s a deeper property. When each item gets weight 2^i, the sum of any subset is unique — that’s binary representation. But consider what this means for Petri nets.

A hand with {K, T, 5, 2}, using rank powers:

K = 2^11 = 2048
T = 2^8  = 256
5 = 2^3  = 8
2 = 2^0  = 1
---------------
Sum = 2313

Now write 2313 in binary: 100100001001. Each bit position maps to a rank. The bits are the places. The 1s are the tokens. The binary representation of the score is the Petri net marking.

This isn’t a decoding trick — it’s the same structure viewed two different ways. A Petri net marking is a vector of token counts across places. When every place holds 0 or 1 tokens, that vector is a bit string. And a bit string is a binary number. So the accumulated score, the bit string, and the marking are three notations for the same mathematical object.

The constraint: this works for set membership — “which cards are in this hand?” — where each place holds at most one token. If a place could hold 2 or more tokens, a single bit can’t represent it. But for binary-state places, the number 2313 doesn’t just rank the hand. It is the hand.
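Recovering the marking from the score is just bit inspection. A short Go sketch (illustrative, using the rank-power assignment above):

```go
package main

import "fmt"

// ranks[i] is the rank whose place corresponds to bit i of the score.
var ranks = []string{"2", "3", "4", "5", "6", "7", "8", "9", "T", "J", "Q", "K", "A"}

// decode recovers the marking — which rank places hold a token —
// from the accumulated power-of-2 score.
func decode(score int) []string {
	var hand []string
	for i, r := range ranks {
		if score&(1<<i) != 0 {
			hand = append(hand, r)
		}
	}
	return hand
}

func main() {
	// 2313 = 0b100100001001: bits 0, 3, 8, 11 -> ranks 2, 5, T, K.
	fmt.Println(decode(2313)) // [2 5 T K]
}
```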

What We Built (And Removed)

We encoded this in the poker hand Petri net:

  1. A kicker_score place that accumulated the total weight (initial = 0)
  2. 52 detection transitions (hc_A-heart, hc_K-spade, …), one per card
  3. Consuming input arcs from each card place to its detection transition
  4. Weighted output arcs from each detection transition to kicker_score

When a card was in the hand, its place had a token, enabling the corresponding transition. It fired once — consuming the card token — and deposited the card’s universal weight into kicker_score. Cards not in the hand never fired. The consuming arcs made each transition self-limiting.

Mechanically, it worked. The transitions fired, tokens accumulated, and the resulting kicker_score correctly ranked hands. Tests passed. You could look at two hands’ kicker scores and determine the winner.

Then we removed it.

Why It Was Wrong

The poker hand model detects pairs, straights, and flushes through structural properties of the net — token patterns across places that enable or inhibit transitions. A pair is detected because two card tokens enable a pair transition. A flush is detected because five suited tokens enable a flush transition. The net does the classification.

Kicker scoring doesn’t work this way. The kicker_score place just accumulates a number. Nothing in the net reads that number. No transition is enabled or disabled by it. No arc weight depends on it. The model needs a separate interpreter to extract meaning from the token count — you have to decompose the sum back into powers of 2 to recover which cards contributed.

That’s not modeling. That’s bookkeeping bolted onto the side.

A Petri net model should be self-describing: the structure of places, transitions, and arcs encodes the rules. If you need a decoder ring to read meaning out of a token count, you’re not modeling the domain in the net — you’re using the net as a storage medium for an external computation. The 52 extra transitions and 104 extra arcs added complexity without adding behavioral insight.

The Boundary Between Net and External Logic

The lesson is about where the model ends and the world begins.

Petri nets are good at modeling concurrent, discrete behavior through structure. Places represent state. Transitions represent events. Arcs define preconditions and effects. The topology is the logic. When you need to add behavior, the right instinct is to add structure — new places, new transitions, new arcs that encode the rules.

But encoding isn’t behavior. Assigning clever weights to arcs doesn’t make the net do anything new — it makes the net store something for someone else to read. That’s fine, as long as you’re clear about the boundary.

When Exponential Weights Belong

The encoding is valuable when the consumer is external and the Petri net is generating output:

Event-sourced state. In an event-sourced system built on a Petri net, transitions emit events and external projections interpret them. If a projection needs to rank items by priority, the net can deposit exponentially-weighted tokens into an output place. The projection reads the accumulated value and uses it directly for sorting. The net produces the value; the projection consumes it. Each side does what it’s good at.

Multi-criteria scoring. When criteria have strict priority order (safety > performance > cost), exponential weights encode the hierarchy. A workflow net accumulates scores as items pass through evaluation stages. The final score in an output place encodes the full priority ranking as a single integer. An external dashboard reads it without replaying the evaluation logic.

Compact set representation. The score in binary is the place vector. This makes power-of-2 sums useful whenever a Petri net needs to communicate which subset of items was selected to an external consumer. One place, one integer, full recovery. The external system reads the bits to reconstruct the set without replaying any transitions.

When They Don’t Belong

The encoding fails when the net itself needs to read the result:

  • No transition depends on the accumulated score. The number sits in a place that nothing fires from.
  • No guard references the score. No conditional behavior changes based on the value.
  • The interpretation requires external logic. You need a separate decoder to extract meaning.

If the only consumer of a token count is outside the net, the encoding adds structural complexity (more transitions, more arcs) without adding behavioral capability. The net is no more powerful with the encoding than without it.

The General Principle

The poker experiment reveals a design heuristic for Petri net modeling:

Ask: does any transition use this information?

If the answer is yes — if a place’s token count enables, disables, or influences a transition — then encode it in the net. That’s what Petri nets are for.

If the answer is no — if the information is only consumed by external systems — then the net is the wrong place for the computation. Let the net generate the raw events. Let the external system do the interpretation. Don’t add 52 transitions and 104 arcs to encode something that a simple function could compute from the event stream.

This boundary applies beyond exponential weights. Any time you’re tempted to add net structure that doesn’t participate in the net’s behavior — logging places, summary counters, encoding tricks — ask whether the information is being modeled or merely stored. Models live in the net. Storage belongs outside.

The exponential encoding itself is genuinely useful. It maps sets to integers, preserves lexicographic ordering, and enables constant-time comparison. The mistake was embedding it in the net instead of applying it at the boundary where the net’s output meets the world.

There is, however, a way to derive poker hand rankings from net topology rather than bolting them on. The integer reduction technique from Chapter 13 builds a hand analysis net where drain transitions proportional to combinatorial frequency create differential outflow. At ODE equilibrium, rare hands accumulate more tokens and produce higher values — straight flush=32, four of a kind=16, down to high card=1. The ranking emerges from topology, and the drain transitions are the mechanism, not bookkeeping. See “Poker Hand Ranking: Integer Reduction” in Chapter 13 for the full treatment.

Declarative Infrastructure

Learning objective: Use JSON-LD and semantic vocabularies to make nets self-describing and composable.

The previous chapters built models, analyzed them, and even proved their transitions in zero knowledge. But how do those models travel between systems? How does a Petri net created in the browser editor reach the Go solver? How does a model authored in 2024 remain valid when the schema grows in 2026?

The answer is JSON-LD — a serialization format that is purely declarative and monotonically expansive. These two properties, more than any feature list, make it reliable infrastructure for composable systems. This chapter explains why, using the three vocabularies that the pflow ecosystem actually uses in production.

Three Contexts, One Discipline

The pflow ecosystem uses three JSON-LD vocabularies. Each serves a different purpose. All share the same envelope.

Petri net model (pflow.xyz editor):

{
  "@context": "https://pflow.xyz/schema",
  "@type": "PetriNet",
  "places": {
    "Idle": { "@type": "Place", "initial": [1], "capacity": [1] }
  },
  "transitions": {
    "Brew": { "@type": "Transition" }
  },
  "arcs": [
    { "@type": "Arrow", "source": "Idle", "target": "Brew", "weight": [1] }
  ]
}

Blog metadata (schema.org):

{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "JSON-LD as Declarative Infrastructure",
  "author": { "@type": "Person", "name": "stackdump" },
  "datePublished": "2026-02-15T00:00:00Z"
}

ActivityPub actor (federation):

{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    "https://w3id.org/security/v1"
  ],
  "type": "Person",
  "preferredUsername": "myork",
  "inbox": "https://blog.stackdump.com/users/myork/inbox",
  "publicKey": { "id": "...#main-key", "publicKeyPem": "..." }
}

Each snippet is self-describing. Each links to a vocabulary that defines its terms. And none of them contain instructions — they are pure assertions.

Purely Declarative

A JSON-LD document is a serialized RDF graph: a set of subject-predicate-object triples. It makes statements about the world but never tells you what to do with them. There are no callbacks, no event handlers, no conditionally-included fields.

This matters for interoperability. The same .jsonld Petri net file is consumed by at least three independent interpreters:

  1. JavaScript — the pflow.xyz browser editor loads it, renders the net, and runs simulations
  2. Go parser — go-pflow reads the same file for ODE-based analysis and validation
  3. Go code generator — petri-pilot compiles it into executable service modules

None of these consumers coordinate with each other. They don’t need to. The file is assertions, not a protocol. Each consumer extracts the triples it understands and ignores the rest. The code generator doesn’t care about x/y layout coordinates; the visual editor doesn’t care about Seal metadata.

Declarative data degrades gracefully by design. Contrast this with imperative serialization — formats where the order of fields implies a processing sequence, or where consumers must execute embedded logic to reconstruct the data. JSON-LD sidesteps all of that. The @context resolves local terms to global IRIs. The consumer interprets the graph. Nothing in between.

Monotonic Expansion

The pflow.xyz schema has grown three times since its introduction:

Year  Terms Added
2024  PetriNet, Place, Transition, Arrow, Person
2025  Seal, Invariant, Guard, TokenSet
2026  CompositeNet, NetType, Link, Interface

Each expansion introduced new terms. No existing term was removed or redefined. A model authored in 2024 using only PetriNet, Place, Transition, and Arrow remains valid under the 2026 schema — not because we tested backwards compatibility, but because the schema only grows.

This is monotonic expansion: new facts can be added, but existing facts are never retracted. It’s the same discipline that makes append-only logs reliable and RDF graphs composable. In a monotonic system, learning more never invalidates what we already know.

Practically, this means old models never break. A Petri net saved before Seal existed still loads in an editor that understands seals. The editor sees a net without seal metadata — a valid state. No migration scripts, no version negotiation, no “this file was created with an older version” warnings. Backwards compatibility emerges from structure, not policy.

Content Addressing via Canonicalization

If JSON-LD is declarative, its identity should derive from what it says, not from how it was serialized. Two documents that make the same assertions — regardless of key order, whitespace, or field arrangement — should hash to the same value.

The URDNA2015 algorithm (RDF Dataset Normalization) makes this possible. It converts any JSON-LD document into a canonical set of N-Quads — sorted, deterministic, order-independent. From there, a standard hash produces a content identifier.

import (
    cid "github.com/ipfs/go-cid"
    mh "github.com/multiformats/go-multihash"
    ld "github.com/piprate/json-gold/ld"
)

proc := ld.NewJsonLdProcessor()
opts := ld.NewJsonLdOptions("")
opts.Format = "application/n-quads"
opts.Algorithm = "URDNA2015"

normalized, _ := proc.Normalize(doc, opts)   // canonical N-Quads
sum, _ := mh.Sum([]byte(normalized.(string)), mh.SHA2_256, -1)
id := cid.NewCidV1(cid.DagJSON, sum)         // CIDv1 with base58btc

The resulting CID becomes the model’s @id — a self-certifying identifier. If the model changes, the CID changes. If two independently-created models happen to describe the same graph, they get the same CID. Identity follows from content, not from a registry or a counter.

This property is what makes seals trustworthy: a seal is a commitment to a specific graph, and any party can verify it by re-canonicalizing and re-hashing.

The Pipeline

JSON-LD document
    v expand (resolve @context)
    v normalize (URDNA2015 -> canonical N-Quads)
    v hash (SHA-256)
    v encode (CIDv1 with base58btc)
Content-addressed identifier

Every step is deterministic. Two parties processing the same assertions — even serialized differently — arrive at the same CID. This is the content-addressing property: identity from substance, not from naming.

The Categorical View

There’s a clean categorical reading of what @context does. A JSON-LD context is a mapping from local terms (short names like Place, Transition) to global IRIs (like https://pflow.xyz/schema#Place). This is a functor:

@context : Terms_local → IRIs_global

It maps the local category of terms used in a document to the global category of well-defined identifiers. Extending a context — adding new term mappings while preserving existing ones — is a natural transformation between functors: the old mapping still holds, and new mappings are layered on top.

The three vocabularies (pflow.xyz, schema.org, ActivityStreams) compose as a coproduct. Each vocabulary contributes its terms to the combined graph without collision, because the IRIs are namespace-disjoint. A document can reference all three contexts simultaneously, and the terms resolve unambiguously.

Content addressing gives us identity in this category. Two objects (JSON-LD documents) are equal if and only if their canonical forms are equal — which is exactly what a content-addressed identifier witnesses.

JSON-LD, categorically: @context is a functor from local names to global identifiers. Schema extension is a natural transformation. Independent vocabularies compose as coproducts. Canonicalization provides identity.

Connecting to Petri Nets

The declarative and monotonic properties of JSON-LD mirror properties of Petri nets themselves:

Declarative. A Petri net model is a declaration of structure — places, transitions, arcs. It doesn’t prescribe an execution order or processing sequence. The firing rule determines what happens at runtime, but the model itself is static assertions about what’s possible. JSON-LD serializes these assertions without adding procedural content.

Monotonic. Adding a place or transition to a Petri net doesn’t invalidate existing structure. New arcs can reference existing elements. Composition (Chapter 4) adds links between independently valid subnets. This mirrors monotonic schema expansion — new terms never break existing ones.

Content-addressable. The canonicalized form of a Petri net model — sorted N-Quads — produces a unique identifier. Two models with the same structure, regardless of how they were authored or serialized, get the same CID. This enables reproducible builds: given a CID, you can verify that a model hasn’t changed.

Practical Implications

Model Interchange

A model created in the pflow.xyz visual editor (JavaScript) can be:

  1. Saved as JSON-LD with @context: "https://pflow.xyz/schema"
  2. Loaded by go-pflow for ODE simulation and analysis
  3. Compiled by petri-pilot into a running service
  4. Sealed with a content-addressed CID for immutable reference
  5. Shared via URL, embedded in blog posts, or stored on IPFS

No format conversion between steps. The same file, the same assertions, consumed by different tools for different purposes.

Schema Evolution

When the pflow schema adds Guard expressions (2025), existing models without guards continue to work. When CompositeNet is added (2026), simple models remain valid — they’re just nets that don’t compose with anything. The consuming tools handle the absence of new terms gracefully because JSON-LD’s open-world assumption means missing fields are simply unknown, not invalid.

Cross-Vocabulary References

A single document can reference multiple vocabularies:

{
  "@context": [
    "https://pflow.xyz/schema",
    "https://schema.org"
  ],
  "@type": "PetriNet",
  "name": "Coffee Shop",
  "author": { "@type": "Person", "name": "stackdump" },
  "places": { ... }
}

The name and author come from schema.org. The places come from pflow.xyz. Both resolve unambiguously because @context is a functor — it maps each local term to exactly one global IRI.

Infrastructure, Not a Feature

JSON-LD is not a feature of the pflow ecosystem. It is infrastructure — the kind that works precisely because it does less, not more. It asserts, it expands, it canonicalizes. Everything else is the consumer’s problem.

This division of responsibility is what makes composable systems possible. The model format doesn’t know about editors, solvers, code generators, or blockchain bridges. It doesn’t need to. It describes a Petri net — places, transitions, arcs — and each consumer extracts what it needs.

The same philosophy that makes Petri nets universal — a small set of primitives that combine to model anything — applies to their serialization. A small set of JSON-LD properties (@context, @type, @id) provides the infrastructure for self-describing, composable, content-addressable models. The complexity lives in the applications. The infrastructure stays simple.

The Visual Editor — pflow.xyz

Learning objective: Design nets visually, export to JSON, and iterate on models interactively.

Parts I through III approached Petri nets through code and mathematics — builder APIs, incidence matrices, ODE solvers. But most people think visually. A Petri net is inherently graphical: circles for places, rectangles for transitions, arrows for arcs. The pflow.xyz editor makes this visual intuition the primary interface. You draw a net, and the underlying formal structure — JSON-LD, topology, initial marking — is generated automatically.

This chapter walks through the editor: how to build nets interactively, how the visual representation maps to the mathematical formalism, and how models travel between the editor and the rest of the ecosystem.

The Editor Interface

The pflow.xyz editor is a browser-based tool built as a single web component (<petri-view>) in vanilla JavaScript — no React, no build step, no npm dependencies. It renders in any modern browser and provides:

  • Canvas — The main drawing area where places and transitions are positioned
  • Toolbar — Controls for adding places, transitions, arcs, and tokens
  • Properties panel — Edit selected element properties (name, initial tokens, arc weight)
  • Simulation controls — Run discrete-event or continuous-time (ODE) simulation directly in the browser

The editor speaks JSON-LD natively. Every model uses @context: "https://pflow.xyz/schema" and @type: "PetriNet", making it self-describing and interoperable with every tool in the ecosystem.

The Web Component

The entire editor is a custom HTML element:

<petri-view model='{"@context":"https://pflow.xyz/schema","@type":"PetriNet",...}'>
</petri-view>

This means the editor can be embedded in any web page — blog posts, documentation, interactive tutorials — by including the script and adding the element. The model attribute accepts a JSON-LD string, and the component handles rendering, editing, and simulation.

Drawing Nets: Click-to-Place Workflow

Building a Petri net in the editor follows a direct manipulation pattern:

Adding Elements

  1. Place: Click the place tool, click the canvas. A circle appears with a default name (P0, P1, …). Click to select, edit the name and initial token count in the properties panel.

  2. Transition: Click the transition tool, click the canvas. A rectangle appears. Name it to describe the action it represents.

  3. Arc: Click the arc tool, click a source element, click a target element. An arrow connects them. Arcs must alternate between places and transitions — the bipartite structure is enforced by the editor. You can’t draw an arc from a place to a place.

  4. Tokens: Set initial token counts in the properties panel, or click directly on a place to increment. The visual representation shows small dots inside places for low counts (1-5) and a number for higher counts.

The Bipartite Constraint

The editor enforces the fundamental Petri net structure: arcs connect places to transitions or transitions to places, never same-type to same-type. This isn’t a UI limitation — it’s the definition of a Petri net. The bipartite graph structure is what makes the mathematical properties (firing rules, conservation laws, reachability) work.

If you try to draw an arc from one place to another, the editor rejects it. The constraint is structural, not a validation check — the data model simply doesn’t support it.
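The constraint amounts to one check: an arc’s endpoints must have different node kinds. A minimal Go sketch (illustrative only — not the editor’s actual data model):

```go
package main

import "fmt"

// kind records whether a node id names a place or a transition.
type kind int

const (
	place kind = iota
	transition
)

type arc struct{ source, target string }

// validArc enforces the bipartite constraint: an arc must connect a
// place to a transition or a transition to a place, never like to like.
func validArc(nodes map[string]kind, a arc) bool {
	s, okS := nodes[a.source]
	t, okT := nodes[a.target]
	return okS && okT && s != t
}

func main() {
	nodes := map[string]kind{
		"beans": place, "espresso": place, "brew": transition,
	}
	fmt.Println(validArc(nodes, arc{"beans", "brew"}))     // true: place -> transition
	fmt.Println(validArc(nodes, arc{"beans", "espresso"})) // false: place -> place
}
```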

Weighted Arcs

Arc weights default to 1. For resource models like the coffee shop (Chapter 5), you need higher weights — an espresso requires 2 units of water and 1 unit of coffee beans. Select an arc and change its weight in the properties panel. The visual representation shows the weight as a number on the arc.

From Visual to Formal

Every action in the editor produces a corresponding change in the JSON-LD model. The visual and formal representations are synchronized:

Visual Action                            Formal Effect
Add place “beans” with 100 tokens        "beans": {"@type": "Place", "initial": [100]}
Add transition “brew”                    "brew": {"@type": "Transition"}
Draw arc from beans to brew, weight 2    {"@type": "Arrow", "source": "beans", "target": "brew", "weight": [2]}

The editor stores x/y coordinates for layout, but these are presentation-only — they don’t affect the mathematical properties. Two models with identical topology but different layouts are semantically equivalent.

The JSON-LD Model

A complete model from the editor:

{
  "@context": "https://pflow.xyz/schema",
  "@type": "PetriNet",
  "places": {
    "beans": { "@type": "Place", "initial": [100], "x": 100, "y": 200 },
    "water": { "@type": "Place", "initial": [200], "x": 100, "y": 300 },
    "espresso": { "@type": "Place", "initial": [0], "x": 300, "y": 250 }
  },
  "transitions": {
    "brew": { "@type": "Transition", "x": 200, "y": 250 }
  },
  "arcs": [
    { "@type": "Arrow", "source": "beans", "target": "brew", "weight": [2] },
    { "@type": "Arrow", "source": "water", "target": "brew", "weight": [3] },
    { "@type": "Arrow", "source": "brew", "target": "espresso", "weight": [1] }
  ]
}

This file can be:

  • Loaded back into the editor (visual round-trip)
  • Parsed by go-pflow for ODE simulation (Chapter 3)
  • Compiled by petri-pilot into a running service (Chapter 16)
  • Sealed with a content-addressed CID (Chapter 14)
  • Shared via URL

In-Browser Simulation

The editor includes a JavaScript ODE solver (petri-solver.js) that matches the Go solver’s output. This enables immediate feedback — draw a net, set rates, press play, watch token counts evolve.

Discrete-Event Mode

Step through the model one transition at a time. The editor highlights enabled transitions (those with sufficient input tokens). Click a transition to fire it. The token counts update immediately. This is useful for:

  • Understanding firing rules by example
  • Tracing specific execution paths
  • Debugging unexpected behavior (“why can’t this transition fire?”)

Continuous-Time Mode

Run the ODE simulation forward. Token counts become continuous values — a place might hold 47.3 tokens. The editor displays a time-series plot showing how each place’s token count evolves over the simulation period.
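The idea can be sketched with a plain Euler integration of the espresso model above, treating arc weights as stoichiometric exponents in a mass-action rate law. This is an illustration only — the editor and go-pflow use an adaptive Tsit5 solver, and the rate constant and step size here are arbitrary:

```go
package main

import "fmt"

func main() {
	// Continuous state for the espresso net: beans, water, espresso.
	beans, water, espresso := 100.0, 200.0, 0.0

	// Mass-action rate: k * beans^2 * water^3 (arc weights as exponents).
	// k is chosen small so the fixed Euler steps stay stable.
	k, dt := 1e-10, 0.01

	for t := 0.0; t < 10.0; t += dt {
		rate := k * beans * beans * water * water * water
		beans -= 2 * rate * dt    // brew consumes 2 beans
		water -= 3 * rate * dt    // ... and 3 water
		espresso += 1 * rate * dt // ... producing 1 espresso
	}
	// Token counts are now continuous — fractional values are expected.
	fmt.Printf("beans=%.1f water=%.1f espresso=%.1f\n", beans, water, espresso)
}
```

Note the conservation laws survive the integration: beans + 2·espresso and water + 3·espresso stay at their initial values, which is the continuous analogue of the P-invariants discussed in Part I.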

The continuous simulation uses the same Tsit5 solver as go-pflow, with the same mass-action kinetics. Models designed in the browser produce the same dynamics when loaded into the Go library. This is the dual-implementation guarantee covered in Chapter 18.

Matching Go and JavaScript

The browser solver uses identical parameters to match the Go solver:

  • Adaptive stepping with Tsit5
  • Default dt=0.01, reltol=1e-3
  • Time span configurable via controls

When a model behaves differently in the browser versus Go, it’s a bug — the outputs should match. This invariant is tested and maintained across both implementations.

Export Formats

The editor supports multiple export paths:

JSON-LD

The native format. Includes all visual layout data (x/y positions) alongside the formal model. Used for interchange between all pflow ecosystem tools.

SVG

The editor can export the current visual as an SVG file — useful for documentation, presentations, and embedding in papers. The SVG includes the graphical representation of places (circles), transitions (rectangles), arcs (arrows), and tokens (dots), preserving the exact visual appearance.

The Go backend also generates SVGs from JSON-LD models using layout algorithms (force-directed, hierarchical, circular), which is useful for automated documentation and reports.

Content-Addressed Storage

When saved through the editor’s API, models are canonicalized via URDNA2015 and hashed to produce a CIDv1 identifier (Chapter 14). The CID becomes a permanent, immutable reference:

POST /api/save  ->  { "cid": "zb2rh..." }
GET /o/zb2rh...  ->  { "@context": "https://pflow.xyz/schema", ... }

The CID is derived from the model’s content, not from a counter or timestamp. Two editors that independently create the same model get the same CID. This enables deduplication and verifiable references.

Sharing and Embedding Models

URL Sharing

Every saved model has a URL: https://pflow.xyz/o/{cid}. Opening this URL loads the model in the editor. Because the CID is content-addressed, the URL is both a link and a guarantee — the model at that URL can never change.

Embedding in Web Pages

The <petri-view> web component can be embedded in any HTML page:

<script src="https://pflow.xyz/petri-view.js"></script>
<petri-view model='...'></petri-view>

This is how interactive models appear in the blog at blog.stackdump.com and in petri-pilot’s model viewer. The embedded component supports the same simulation capabilities as the full editor.

SVG Generation via API

The Go backend provides an SVG rendering endpoint:

GET /img/{cid}.svg
GET /img/{cid}.svg?layout=force-atlas-2
GET /img/{cid}.svg?layout=hierarchical

This generates publication-quality SVG diagrams from any stored model, with configurable layout algorithms. Useful for automated documentation and CI pipelines.

The Editor as a Design Tool

The visual editor is where models typically begin. A domain expert sketches places and transitions — “there’s a waiting room, a triage station, a doctor’s office” — and connects them with arcs. The initial model is rough but captures the essential structure.

Then iteration: add initial tokens, set arc weights, run a simulation, observe the dynamics. “The tokens pile up before the doctor — that’s the bottleneck.” Adjust rates, add a second doctor transition, simulate again. The visual feedback loop is faster than editing code because the model is visible.

Once the model stabilizes, it travels through the ecosystem:

  1. Editor — design and iterate visually
  2. go-pflow — run formal analysis (reachability, P-invariants, sensitivity)
  3. petri-pilot — generate a running application
  4. ZK circuit — add privacy with zero-knowledge proofs

The editor is the entry point. The formalism behind it makes everything else possible. The circles and rectangles on screen are the same places and transitions that appear in incidence matrices, ODE systems, and ZK circuits. The visual representation is sugar. The mathematics is the substance.

Try it live: Design your own Petri net at pflow.xyz, then see generated applications at pilot.pflow.xyz — every example in this book was built with these tools.

Code Generation — From Model to Application

Learning objective: Generate full-stack applications from a Petri net model.

The visual editor (Chapter 15) produces a model. The Go library (Chapter 17) analyzes it. But at some point you need a running application — HTTP endpoints, event storage, a frontend, authentication. Petri-pilot bridges the gap: it takes a Petri net model and generates a complete, deployable application. The generation is deterministic. The same model always produces the same code.

This chapter explains the architecture: how a JSON model becomes a Go backend and JavaScript frontend, why events are separated from bindings, and how customization survives regeneration.

The Pipeline

Petri-pilot’s code generation follows a categorical pattern — a functor from model to code:

Schema --EnrichModel--> Context --Template--> Artifact
  |                        |                      |
Model                  Universal              Go/JS/YAML
(source)               Object                 (target)

Each step is deterministic. No randomness, no external state. Given the same model, you get the same code every time.

Step 1: Parse

The model starts as a JSON file (or the tokenmodel DSL from Chapter 4). The schema parser produces a structured representation: places, transitions, arcs, roles, views, events.

model := schema.Parse(jsonBytes)

Step 2: Validate

Before generating code, petri-pilot validates the Petri net properties. Is the net well-formed? Are all arc sources and targets valid? Do guard expressions parse? This catches errors before they become generated code that doesn’t compile.

result := validator.Validate(model)

Step 3: Build Context

The Context is the universal intermediate representation — the heart of the functor. It takes the flat model and computes everything the templates need:

ctx := golang.NewContext(model, opts)

The Context contains:

  1. Primitives — Places, Transitions, Arcs (from the Petri net)
  2. Derived structures — Events, Routes, Handlers (computed from primitives)
  3. Feature flags — HasViews, HasAdmin, HasNavigation (conditional generation)
  4. Helper methods — TransitionRequiresAuth, GetEnabledTransitions (template utilities)

This is the key design decision: the Context is complete. Every piece of information needed by any template is accessible through it. Templates never reach back into the raw model.

Step 4: Project to Artifacts

Each template file is a projection — a pure function from Context to source code:

Context --api.tmpl------> api.go
Context --workflow.tmpl-> workflow.go
Context --events.tmpl---> events.go
Context --main.tmpl-----> main.js
Context --admin.tmpl----> admin.js

Multiple projections from the same Context produce all the files needed for a running application.

Inside-Out Design

Traditional code generation is “outside-in”: templates consume ad-hoc data structures, and the templates encode most of the logic. Petri-pilot inverts this. The model contains everything needed to describe an application — state machine topology, access control, UI structure, operational config. Code is a projection of this model into a target language.

This means the model is readable by humans, LLMs, and formal analysis tools. The generated code is an implementation detail. You modify the model, not the code.

Events First

The most important architectural pattern in petri-pilot is the separation of events from bindings.

Events: The Complete Record

An event is the full business record emitted when a transition fires. It captures everything relevant to the domain:

{
  "id": "tokens_transferred",
  "name": "Tokens Transferred",
  "fields": [
    {"name": "from", "type": "string", "required": true},
    {"name": "to", "type": "string", "required": true},
    {"name": "amount", "type": "number", "required": true},
    {"name": "memo", "type": "string"},
    {"name": "transaction_id", "type": "string"}
  ]
}

Events are complete, immutable, and auditable. They form the event log that can reconstruct any past state.

Bindings: Operational Data

Bindings extract only what’s needed for state computation — the minimal data for guards and arc transformations:

{
  "id": "transfer",
  "event": "tokens_transferred",
  "guard": "balances[from] >= amount && amount > 0",
  "bindings": [
    {"name": "from", "type": "string", "keys": ["from"]},
    {"name": "to", "type": "string", "keys": ["to"]},
    {"name": "amount", "type": "number", "value": true}
  ]
}

The tokens_transferred event has five fields. The transfer binding uses three. The memo and transaction ID are preserved in the event for audit but don’t participate in state computation.

Why Separate Them?

The separation enables independent evolution. You can add fields to an event (new audit data, metadata) without changing the state logic. You can modify how bindings compute state without affecting the event schema. Events answer “what happened in the business domain?” Bindings answer “what data do I need to validate and update state?”

This maps directly to Petri net semantics. The event is the record of a transition firing. The binding is the data needed to check enabledness (guards) and compute the new marking (arc transformations).
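
The pattern can be sketched end to end with the transfer example above (a hand-rolled illustration, not petri-pilot's generated code; the guard is written inline rather than parsed from the DSL):

```javascript
// Sketch: a binding extracts only the operational fields from a full
// event, then the guard and state update run against that narrow view.
// The full event (including memo, transaction_id) goes to the log untouched.
function extractBinding(event, fieldNames) {
  const bound = {};
  for (const name of fieldNames) bound[name] = event[name];
  return bound;
}

function applyTransfer(state, event) {
  const b = extractBinding(event, ["from", "to", "amount"]);
  // Guard from the example: balances[from] >= amount && amount > 0
  if (!(state.balances[b.from] >= b.amount && b.amount > 0)) {
    return null; // transition not enabled
  }
  const balances = { ...state.balances };
  balances[b.from] -= b.amount;
  balances[b.to] = (balances[b.to] || 0) + b.amount;
  return { balances };
}
```

Adding a sixth field to the event changes nothing here: the binding names the three fields it uses, and the rest ride along in the log.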

Event Names from Transitions

Event types are derived, not declared separately. A transition named validate_order produces an event type OrderValidated. A transition named ship produces the event type Ship. The event schema is the transition schema — no redundant definitions.

Generated File Structure

A complete generated application contains:

Backend (Go)

File          Purpose
workflow.go   Petri net definition — places, transitions, arcs
aggregate.go  Event-sourced aggregate — applies events to state
api.go        HTTP handlers — REST endpoints for each transition
events.go     Event types — structs for each event
views.go      View definitions — query handlers for UI
auth.go       Authentication — role-based access (if roles defined)
main.go       Entry point — wires everything together

Frontend (ES Modules)

File                  Purpose
src/main.js           Entry point, routing
src/admin.js          Admin dashboard — instance listing, management
src/views.js          Instance views — detail pages, state display
custom/extensions.js  User customizations (preserved across regeneration)
custom/theme.css      Custom styling (preserved)

The backend uses Go’s standard library. The frontend uses vanilla ES modules — no React, no build step, no npm. Both are generated from the same Context.

Guards as Expressions

Access control and preconditions use a DSL, not embedded code:

{
  "guard": "balances[from] >= amount && from != to"
}

The guard expression is:

  • Readable — both humans and LLMs understand it
  • Verifiable — can be analyzed statically before code generation
  • Portable — the same expression generates both Go and JavaScript code

The DSL parser (pkg/dsl/parser.go) handles arithmetic, comparisons, boolean operators, and map lookups. Guards are evaluated at runtime against the current state to determine transition enabledness — exactly the Petri net firing rule, extended to data-carrying nets.
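
The evaluation half of this can be sketched by skipping the parser and walking an AST directly (the node shapes below are illustrative, not the pkg/dsl representation):

```javascript
// Sketch of guard evaluation semantics: the DSL parser produces an AST
// like the one below; evaluation walks it against the current state and
// bound variables. Node shapes are illustrative, not pkg/dsl's.
function evalGuard(node, env) {
  switch (node.op) {
    case "var":   return env[node.name];
    case "const": return node.value;
    case "index": return evalGuard(node.obj, env)[evalGuard(node.key, env)];
    case ">=":    return evalGuard(node.l, env) >= evalGuard(node.r, env);
    case "!=":    return evalGuard(node.l, env) !== evalGuard(node.r, env);
    case "&&":    return evalGuard(node.l, env) && evalGuard(node.r, env);
    default:      throw new Error("unknown op " + node.op);
  }
}

// AST for: balances[from] >= amount && from != to
const guard = {
  op: "&&",
  l: { op: ">=",
       l: { op: "index",
            obj: { op: "var", name: "balances" },
            key: { op: "var", name: "from" } },
       r: { op: "var", name: "amount" } },
  r: { op: "!=",
       l: { op: "var", name: "from" },
       r: { op: "var", name: "to" } },
};
```

Because the walker is a pure function of the AST and an environment, the same AST can drive a Go evaluator and a JavaScript evaluator, which is what makes the expression portable.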

Feature Flags from Model Presence

Templates use conditional generation based on what the model contains:

{{if .HasAdmin}}
// Generate admin dashboard code
{{end}}

The flag is computed from model presence:

func (c *Context) HasAdmin() bool {
    return c.Admin != nil && c.Admin.Enabled
}

No configuration files. No feature toggles. If admin config exists in the model, admin code generates. If the model defines roles, auth code generates. If views are defined, view handlers generate. The model is the configuration.

Customization Architecture

Generated code is regenerated when the model changes. But users need to customize behavior without losing changes. Petri-pilot solves this with a two-directory pattern:

frontend/
+-- src/              # REGENERATED -- core application code
|   +-- main.js
|   +-- admin.js
|   +-- views.js
+-- custom/           # PRESERVED -- user customizations
    +-- extensions.js # Hook registrations
    +-- components.js # Custom web components
    +-- theme.css     # Custom styling

Files in src/ are overwritten on every generation. Files in custom/ are created once (with sensible defaults) and never overwritten — the generator uses a SkipIfExists flag.

Extension Points

Generated code defines hooks that custom code can register with:

// custom/extensions.js
adminExtensions.customActions.push({
  label: 'Export JSON',
  className: 'btn btn-secondary',
  onClick: (instance) => {
    const blob = new Blob([JSON.stringify(instance, null, 2)])
    // download logic
  }
})

Available extension points include custom action buttons, custom table columns, state renderers, lifecycle callbacks (on create, delete, archive), and custom view sections. The generated code checks for registered extensions and invokes them at the appropriate points.

The Serving Layer

When multiple generated services run together, petri-pilot provides a unified serving layer:

Unified GraphQL API

All services implementing the GraphQLService interface are combined into a single schema at /graphql. Operations are namespaced by service — blogpost_create, tictactoe_move. A combined schema endpoint at /schema returns the full SDL.

Model Viewer

The serving layer includes a Petri net viewer at /pflow that renders any model using the pflow.xyz <petri-view> web component (Chapter 15). It converts the internal model format to JSON-LD and passes it to the web component for interactive visualization and simulation.

GraphQL Playground

An interactive playground at /graphql/i provides:

  1. Editor — Standard GraphQL playground for writing queries
  2. Operations Explorer — Parses the SDL to list all queries and mutations grouped by service
  3. Models Panel — Displays each model’s Petri net as SVG, with events, roles, places, and transitions

The Complete Workflow

A typical petri-pilot workflow:

  1. Design — Define the model as JSON or tokenmodel DSL
  2. Validate — petri_validate checks structure and Petri net properties
  3. Simulate — petri_simulate verifies workflow behavior before generating code
  4. Generate — petri_codegen produces the full application
  5. Customize — Edit custom/extensions.js for app-specific behavior
  6. Iterate — When the model changes, regenerate. Customizations survive.

The model is the contract. Everything else — events, routes, handlers, views, admin panels — is derived. Change the model, regenerate, and the application reflects the new structure. The Petri net formalism makes this possible: because the model has formal semantics (places, transitions, firing rules), the code generator knows exactly what each element means and how to implement it.

Try it live: Every generated application — Tic-Tac-Toe, Coffee Shop, Texas Hold’em, Knapsack — is available at pilot.pflow.xyz.

The go-pflow Library

Learning objective: Use the core Go library for simulation, analysis, and integration.

The previous chapters used go-pflow implicitly — building nets, running simulations, analyzing results. This chapter makes the library itself the subject. It covers the package structure, the fluent builder API, solver configuration, and the higher-level abstractions (state machines, workflows, actors) that compile down to Petri nets.

Think of this chapter as the reference that ties together everything from Parts I through III.

Package Overview

go-pflow is organized into packages that map to different concerns:

Package        Purpose
petri          Core types, fluent Builder API
solver         ODE solvers (Tsit5, RK45, implicit), equilibrium detection
stateutil      State map utilities (Copy, Apply, Merge, Sum, Diff)
reachability   Discrete state space, deadlock and liveness analysis
hypothesis     Move evaluation for game AI
sensitivity    Parameter sensitivity analysis
cache          Memoization for repeated simulations
eventlog       Parse CSV/JSONL event logs
mining         Process discovery (Alpha, Heuristic), rate learning
monitoring     Real-time case tracking, SLA alerts
statemachine   Hierarchical states compiled to Petri nets
workflow       Task dependencies, resources, SLA tracking
actor          Actor model with message passing
visualization  SVG rendering
tokenmodel     Token model schemas with DSL

Decision Tree

Problem                                   Package
Business workflows                        workflow
Event-driven state machines               statemachine
Message-passing systems                   actor
Game AI and move evaluation               hypothesis, cache
Parameter optimization                    sensitivity
Process discovery from logs               mining, eventlog
Deadlock and liveness checking            reachability
Population dynamics (epidemics, ecology)  petri + solver
General state and resource flow           petri

Building Nets

The fluent builder API constructs Petri nets in a single expression:

net := petri.Build().
    Place("A", 10).Place("B", 0).
    Transition("t1").
    Arc("A", "t1", 1).Arc("t1", "B", 1).
    Done()

Each method returns the builder, allowing chaining. Place takes a name and initial token count. Transition takes a name. Arc takes source, target, and weight. Done finalizes and returns the *petri.Net.

Chain Helper

For linear sequences — common in workflows — the Chain helper eliminates repetitive arc declarations:

net := petri.Build().
    Chain(10, "start", "t1", "middle", "t2", "end").
    Done()

This creates places start (10 tokens), middle, and end, transitions t1 and t2, and all connecting arcs. The first argument is the initial token count for the first place.
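
What Chain expands to can be sketched directly (a JavaScript illustration of the behavior described above, not the Go library itself):

```javascript
// Sketch of what Chain(10, "start", "t1", "middle", "t2", "end") expands
// to: alternating places and transitions, with unit-weight arcs
// connecting each transition to its neighbors.
function chain(initialTokens, ...names) {
  const net = { places: {}, transitions: [], arcs: [] };
  names.forEach((name, i) => {
    if (i % 2 === 0) {
      // Even positions are places; only the first gets initial tokens
      net.places[name] = i === 0 ? initialTokens : 0;
    } else {
      // Odd positions are transitions, wired to the adjacent places
      net.transitions.push(name);
      net.arcs.push({ from: names[i - 1], to: name, weight: 1 });
      net.arcs.push({ from: name, to: names[i + 1], weight: 1 });
    }
  });
  return net;
}
```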

Rates

Transition rates are separate from the net structure. The WithRates method assigns a default rate to all transitions:

net, rates := petri.Build().
    Place("S", 100).Place("I", 1).Place("R", 0).
    Transition("infect").Transition("recover").
    Arc("S", "infect", 1).Arc("I", "infect", 1).Arc("infect", "I", 2).
    Arc("I", "recover", 1).Arc("recover", "R", 1).
    WithRates(1.0)

Rates can also be set individually via a map[string]float64. The separation of structure and rates is deliberate — the same net can be simulated with different rate constants for sensitivity analysis or parameter fitting.

Shortcut Builders

Common model patterns have dedicated builders:

// SIR epidemic model
net, rates := petri.Build().SIR(999, 1, 0).WithRates(1.0)

This creates the susceptible-infected-recovered model with the specified initial populations and a default rate of 1.0 for both infection and recovery transitions.

Running Simulations

The solver package implements numerical ODE integration:

prob := solver.NewProblem(net, net.SetState(nil), [2]float64{0, 100}, rates)
sol := solver.Solve(prob, solver.Tsit5(), solver.DefaultOptions())
final := sol.GetFinalState()

NewProblem takes the net, initial state, time span, and rates. Solve runs the integration. The result contains the full trajectory — token counts at each time step for every place.

Solver Methods

Method      Description
Tsit5()     Tsitouras 5th-order Runge-Kutta. Default choice. Adaptive stepping, high accuracy.
RK45()      Classic Runge-Kutta 4/5. Similar to Tsit5, slightly different error estimation.
Implicit()  For stiff systems where explicit methods require impractically small steps.

Tsit5 is the right choice for nearly all Petri net simulations. The adaptive step size automatically shrinks where dynamics are fast (early transients) and grows where dynamics are smooth (approaching equilibrium).

Solver Presets

Preset             Use Case
DefaultOptions()   General purpose — balanced accuracy and speed
FastOptions()      Game AI, interactive use — ~10x faster, lower accuracy
AccurateOptions()  Research, publishing — high precision
GameAIOptions()    Move evaluation — optimized for hypothesis testing
EpidemicOptions()  SIR/SEIR models — tuned for population dynamics

Each preset configures step size (Dt), tolerance (RelTol, AbsTol), and maximum steps. For most applications, DefaultOptions() works. For interactive use where latency matters more than precision, FastOptions() trades accuracy for speed.

Equilibrium Detection

Many models converge to a steady state where token counts stop changing:

finalState, reached := solver.FindEquilibrium(prob)

The equilibrium detector monitors the derivative norm. When all derivatives are below a threshold — meaning no place’s token count is changing significantly — the simulation stops early. This is faster than simulating to a fixed time horizon and checking the result.
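
The stopping rule can be sketched with a fixed-step integrator on a single-transition net (the real solver uses adaptive Tsit5; only the derivative-norm test is the point here):

```javascript
// Sketch of derivative-norm equilibrium detection: integrate a simple
// A -> B net with mass-action rate k, and stop as soon as no place's
// token count is changing significantly.
function findEquilibrium(state, k, dt, tol, maxSteps) {
  let { A, B } = state;
  for (let i = 0; i < maxSteps; i++) {
    const flow = k * A;           // mass-action rate of the single transition
    const dA = -flow, dB = flow;  // derivative of each place
    if (Math.max(Math.abs(dA), Math.abs(dB)) < tol) {
      return { state: { A, B }, reached: true };
    }
    A += dA * dt; // fixed-step Euler update (stand-in for Tsit5)
    B += dB * dt;
  }
  return { state: { A, B }, reached: false };
}
```

For this net the detector fires once A has drained, long before any fixed time horizon would need to be chosen.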

Reading Results

The solution object provides the full trajectory:

sol := solver.Solve(prob, solver.Tsit5(), solver.DefaultOptions())

// Final state
final := sol.GetFinalState()
fmt.Println(final["espresso"]) // 47.3

// Full trajectory
for i, t := range sol.T {
    fmt.Printf("t=%.1f: beans=%.1f\n", t, sol.U[i]["beans"])
}

sol.T is the time points. sol.U[i] is the state at time sol.T[i] — a map from place names to token counts. The time points are not evenly spaced — the adaptive solver places more points where dynamics are fast.

State Manipulation

The stateutil package provides utilities for working with state maps:

import "github.com/pflow-xyz/go-pflow/stateutil"

// Apply a delta to a state
newState := stateutil.Apply(state, map[string]float64{"pos": 0, "mark": 1})

// Sum all token counts
total := stateutil.Sum(state)

// Compute difference between two states
changes := stateutil.Diff(before, after)

// Deep copy
stateCopy := stateutil.Copy(state)

These utilities are used throughout the library — in hypothesis evaluation, reachability analysis, and monitoring. They handle the common pattern of manipulating map[string]float64 state representations.
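
JavaScript equivalents of these operations are easy to sketch; note that treating Apply as adding a delta to the state is an assumption about its semantics:

```javascript
// Sketches of the stateutil operations (the library itself is Go):
// state maps are plain objects from place name to token count.
const copyState = (s) => ({ ...s });

// Assumption: Apply adds a delta to the state rather than overwriting it
const applyDelta = (s, delta) => {
  const out = { ...s };
  for (const [k, v] of Object.entries(delta)) out[k] = (out[k] || 0) + v;
  return out;
};

const sumTokens = (s) => Object.values(s).reduce((a, b) => a + b, 0);

// Diff reports only the places whose counts changed
const diffStates = (before, after) => {
  const out = {};
  for (const k of new Set([...Object.keys(before), ...Object.keys(after)])) {
    const d = (after[k] || 0) - (before[k] || 0);
    if (d !== 0) out[k] = d;
  }
  return out;
};
```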

Reachability Analysis

For discrete analysis — where tokens are integers and transitions fire one at a time — the reachability package explores the state space:

analyzer := reachability.NewAnalyzer(net).WithMaxStates(10000)
result := analyzer.Analyze()

The result reports:

  • Bounded — whether all places have finite token counts across all reachable states
  • HasCycle — whether the state space contains cycles (the system can return to a previous state)
  • Live — whether every transition can eventually fire from every reachable state
  • Deadlocks — states where no transition is enabled

This is the discrete counterpart to ODE simulation. Where ODE simulation tracks continuous dynamics (Chapter 3), reachability analysis exhaustively enumerates discrete states. It’s exact but computationally expensive — the state space grows exponentially with the number of places. The WithMaxStates limit prevents runaway exploration.
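
The exploration loop can be sketched as a breadth-first search over integer markings (unit arc weights assumed for brevity; not the reachability package's implementation):

```javascript
// Sketch of discrete reachability: breadth-first exploration of integer
// markings under the firing rule, collecting deadlocked states and
// respecting a maximum state-count limit.
function explore(net, initial, maxStates) {
  const key = (m) => JSON.stringify(m);
  const seen = new Map([[key(initial), initial]]);
  const queue = [initial];
  const deadlocks = [];
  while (queue.length > 0 && seen.size <= maxStates) {
    const m = queue.shift();
    let enabled = 0;
    for (const t of net.transitions) {
      if (t.inputs.every((p) => m[p] >= 1)) { // unit weights assumed
        enabled++;
        const next = { ...m };
        for (const p of t.inputs) next[p] -= 1;
        for (const p of t.outputs) next[p] += 1;
        const k = key(next);
        if (!seen.has(k)) { seen.set(k, next); queue.push(next); }
      }
    }
    if (enabled === 0) deadlocks.push(m); // no transition can fire here
  }
  return { states: seen.size, deadlocks };
}
```

Even this toy version shows why the limit matters: every extra place multiplies the number of distinct markings the queue can reach.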

Hypothesis Evaluation

The hypothesis package evaluates candidate moves in game models (Chapter 6):

eval := hypothesis.NewEvaluator(net, rates, func(final map[string]float64) float64 {
    return final["wins"] - final["losses"]
})

// Find best move among candidates
bestIdx, _ := eval.FindBestParallel(state, []map[string]float64{move1, move2, move3})

// Sensitivity analysis — which transitions matter most?
impact := eval.SensitivityImpact(state)

The evaluator works by simulating each candidate move forward to equilibrium and scoring the result. FindBestParallel runs all candidates concurrently. The scoring function is user-defined — for tic-tac-toe it might maximize win tokens; for a resource model it might maximize throughput.

The cache package memoizes simulation results, so repeated evaluations of the same state (common in game tree search) are served from cache rather than re-simulated.
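
The caching idea can be sketched with a canonical state key (evaluate stands in for a full simulation; this is not the cache package's API):

```javascript
// Sketch of simulation memoization: key the cache on a canonical
// serialization of the state, so repeated game-tree positions reuse
// earlier results instead of re-simulating.
function makeCachedEvaluator(evaluate) {
  const cache = new Map();
  let misses = 0;
  return {
    score(state) {
      // Sort keys so {a:1, b:2} and {b:2, a:1} produce the same key
      const k = JSON.stringify(Object.keys(state).sort().map((p) => [p, state[p]]));
      if (!cache.has(k)) { misses++; cache.set(k, evaluate(state)); }
      return cache.get(k);
    },
    misses: () => misses,
  };
}
```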

Higher-Level Abstractions

The core petri package provides the mathematical foundation. Three packages build domain-specific abstractions on top.

State Machines

The statemachine package implements hierarchical statecharts that compile to Petri nets:

chart := statemachine.NewChart("light").
    Region("state").
        State("red").Initial().
        State("green").
        State("yellow").
    EndRegion().
    When("timer").In("state:red").GoTo("state:green").
    When("timer").In("state:green").GoTo("state:yellow").
    When("timer").In("state:yellow").GoTo("state:red").
    Build()

m := statemachine.NewMachine(chart)
m.SendEvent("timer")
m.State("state")  // "green"

The statechart API is familiar to anyone who has used UML state diagrams. Under the hood, each state becomes a place, each transition becomes a Petri net transition, and the mutual exclusion constraint (only one state active per region) is enforced by the Petri net structure. Hierarchical states (nested regions) compose naturally.

Workflows

The workflow package models task dependencies with resources and SLA tracking:

wf := workflow.New("order").
    ManualTask("receive", "Receive", 2*time.Minute).
    AutoTask("validate", "Validate", 30*time.Second).
    ManualTask("ship", "Ship", 5*time.Minute).
    From("receive").Then("validate").To("ship").
    Start("receive").End("ship").
    WithSLA(4 * time.Hour).
    Build()

engine := workflow.NewEngine(wf)
engine.StartCase("case-1", nil, workflow.PriorityMedium)

Tasks can be manual (require human action) or automatic (fire when enabled). The SLA tracker monitors elapsed time per case and raises alerts when deadlines approach. The underlying Petri net provides formal properties — deadlock freedom means every started case can complete, and conservation laws ensure resources are properly allocated.

Actors

The actor package implements the actor model — concurrent entities that communicate via message passing:

system := actor.NewSystem("sys").DefaultBus().
    Actor("worker").
        Handle("task", func(ctx *actor.ActorContext, s *actor.Signal) error {
            ctx.Emit("done", map[string]any{"result": "ok"})
            return nil
        }).
        Done().
    Start()

Each actor is a Petri net. Message receipt is a transition. Processing is a sequence of internal transitions. Message sending is an output transition. The actor system composes actor nets through shared places (the message bus). Concurrency, deadlock, and liveness properties of the actor system follow from Petri net analysis of the composed net.

Sensitivity Analysis

The sensitivity package measures how changes in parameters affect simulation outcomes:

sa := sensitivity.NewAnalyzer(net, rates, baseState)
results := sa.AnalyzeAll()

for _, r := range results {
    fmt.Printf("%s: impact=%.4f\n", r.Parameter, r.Impact)
}

The analyzer perturbs each rate constant individually, re-simulates, and measures the change in the final state. High-impact parameters are bottlenecks — places where small changes in rate produce large changes in outcome. This connects to the bottleneck analysis from Chapter 5 (coffee shop) and the what-if scenarios from Chapter 11 (process mining).
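
The perturb-and-resimulate loop can be sketched with a closed-form stand-in for the solver (the scoring of "impact" here is illustrative, not the sensitivity package's exact metric):

```javascript
// Sketch of one-at-a-time sensitivity analysis: perturb each rate by a
// small relative factor, re-run the simulation, and rank parameters by
// how much the final state moves. simulate() stands in for the ODE solver.
function sensitivityRank(rates, simulate, eps = 0.01) {
  const base = simulate(rates);
  const results = [];
  for (const name of Object.keys(rates)) {
    const perturbed = { ...rates, [name]: rates[name] * (1 + eps) };
    const impact = Math.abs(simulate(perturbed) - base) / eps;
    results.push({ parameter: name, impact });
  }
  return results.sort((a, b) => b.impact - a.impact); // bottlenecks first
}
```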

Process Mining Integration

The mining and monitoring packages (covered in depth in Chapter 11) complete the library. They connect external data to Petri net models:

// Parse event logs
log, _ := eventlog.ParseCSV("events.csv", eventlog.DefaultCSVConfig())

// Discover process model
result, _ := mining.Discover(log, "heuristic")
net := result.Net

// Learn rates from timing data
rates := mining.LearnRatesFromLog(log, net)

// Check how well the model fits the data
conf := mining.CheckConformance(log, net)

The same *petri.Net type produced by the builder API or the mining algorithms feeds into the solver, reachability analyzer, and visualization. The library doesn’t distinguish between hand-crafted and data-discovered models.

SVG Rendering

The visualization package generates SVG diagrams from nets:

svg := visualization.RenderSVG(net, visualization.ForceAtlas2Layout)

Layout algorithms include force-directed (ForceAtlas2), hierarchical (top-to-bottom), and circular. The generated SVGs show places as circles, transitions as rectangles, arcs as arrows, and token counts as numbers or dots. These SVGs appear in the pflow.xyz editor (Chapter 15), the petri-pilot model viewer (Chapter 16), and throughout this book.

Putting It Together

The library’s packages compose. A typical application might:

  1. Build a net with the fluent API (petri)
  2. Simulate with the ODE solver (solver)
  3. Analyze reachability for correctness (reachability)
  4. Evaluate moves for game AI (hypothesis)
  5. Tune parameters via sensitivity analysis (sensitivity)
  6. Mine rates from production logs (mining)
  7. Monitor running instances for SLA compliance (monitoring)
  8. Visualize the net and simulation results (visualization)

Each package does one thing. The shared *petri.Net type is the lingua franca — every package consumes and produces it. This is the practical benefit of the universal abstraction: a single formalism, implemented once, applied everywhere.

Dual Implementation and Verification

Learning objective: Achieve specification parity between Go and JavaScript implementations.

Every tool in the pflow ecosystem processes the same JSON-LD models. The Go library runs ODE simulations on the server. The JavaScript solver runs them in the browser. If they produce different results from the same input, the specification is ambiguous — and any downstream analysis, proof, or decision based on those results is suspect.

This chapter covers the dual-implementation discipline: why both Go and JavaScript implementations exist, how parity is verified, and how event sourcing ties them together.

Why Two Implementations?

A specification written in prose is ambiguous. A specification implemented in one language might contain bugs that look like features. But a specification implemented in two independent languages — and verified to produce identical output — is trustworthy in a way that neither implementation alone can be.

The pflow ecosystem maintains two implementations:

                 Go                     JavaScript
Role             Authoritative backend  Browser verification
Solver           solver/tsit5.go        petri-solver.js
Petri net model  petri/ package         petri-sim.js
Runs in          Server, CLI            Browser
Used by          go-pflow, petri-pilot  pflow.xyz editor

Both implementations parse the same JSON-LD format. Both implement the same Tsit5 ODE solver. Both use the same mass-action kinetics for transition rates. The claim: given identical input (model + initial state + rates + time span), they produce identical output (final state within numerical tolerance).

State Root Parity

The strongest form of verification: compute a hash of the simulation output and compare. If the Go server and the JavaScript browser produce the same hash from the same model, the implementations agree.

JSON-LD Model
    |
    v
Go Solver    ->  State trajectory  ->  Hash  --+
                                                +--  Equal?
JS Solver    ->  State trajectory  ->  Hash  --+

This is the same principle as content addressing (Chapter 14), applied to simulation results. The hash doesn’t prove the simulation is correct in some absolute sense — it proves the two implementations agree. And agreement between independent implementations, written in different languages with independent code paths, is strong evidence of correctness.

What Gets Compared

The comparison operates at the level of the state trajectory:

  1. Time points — Both solvers use adaptive stepping, so they may choose different time points. The comparison evaluates at fixed checkpoints.
  2. Token counts — For each place at each checkpoint, the token count must match within tolerance (typically 1e-6).
  3. Final state — The most important check: do both solvers arrive at the same equilibrium?

The tolerance accounts for floating-point differences between Go and JavaScript — different rounding behavior, different intermediate precision. The tolerance is tight enough to catch algorithmic errors but loose enough to accommodate platform differences.
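
The checkpoint comparison can be sketched as follows (the trajectory functions stand in for interpolated solver output; the tolerance value matches the one quoted above):

```javascript
// Sketch of the parity check: evaluate both trajectories at fixed
// checkpoints and require every place's token count to agree within
// tolerance. trajA/trajB map a checkpoint time to a state object.
function trajectoriesAgree(trajA, trajB, checkpoints, tol = 1e-6) {
  for (const t of checkpoints) {
    const a = trajA(t), b = trajB(t);
    for (const place of Object.keys(a)) {
      if (Math.abs(a[place] - b[place]) > tol) {
        return { ok: false, at: t, place }; // first point of disagreement
      }
    }
  }
  return { ok: true };
}
```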

The Tsit5 Algorithm — Twice

Both implementations use the Tsitouras 5th-order Runge-Kutta method (Tsit5). The algorithm is well-documented in the numerical methods literature, and the coefficients are published constants. But implementing it correctly requires getting many details right:

Go Implementation

// solver/tsit5.go
func (m Tsit5Method) Step(f ODEFunction, t float64, y []float64, dt float64) ([]float64, float64) {
    // 7 stages with Tsitouras coefficients
    k1 := f(t, y)
    k2 := f(t + c2*dt, addVec(y, scaleVec(dt*a21, k1)))
    // ... 5 more stages
    // 5th-order solution
    yNew := addVec(y, scaleVec(dt, linComb(b1, k1, b2, k2, ...)))
    // 4th-order embedded solution for error estimation
    yErr := addVec(y, scaleVec(dt, linComb(e1, k1, e2, k2, ...)))
    // Adaptive step size control
    err := errorNorm(yNew, yErr)
    return yNew, err
}

JavaScript Implementation

// petri-solver.js
function tsit5Step(f, t, y, dt) {
    // Same 7 stages, same coefficients
    const k1 = f(t, y);
    const k2 = f(t + c2*dt, addVec(y, scaleVec(dt*a21, k1)));
    // ... 5 more stages
    // Same 5th-order solution
    const yNew = addVec(y, scaleVec(dt, linComb(b1, k1, b2, k2, ...)));
    // Same error estimation and step control
    const err = errorNorm(yNew, yErr);
    return [yNew, err];
}

The algorithms are structurally identical. The coefficients are the same published constants. The adaptive step size control uses the same formula. The only differences are syntactic — Go vs JavaScript — and numerical — 64-bit floats in both, but with platform-specific rounding.

Parameters Must Match

Both solvers use identical default parameters:

  • Adaptive stepping with Tsit5
  • Default dt = 0.01
  • Relative tolerance reltol = 1e-3
  • Same time span from the model

When a model behaves differently in the browser versus the Go backend, it’s a bug. The outputs should match. This invariant is tested and maintained across both implementations.

Event Sourcing

The dual-implementation discipline extends beyond ODE simulation to state management. The pflow ecosystem uses event sourcing as its state management pattern:

State is not stored directly. Instead, the system stores an immutable log of events (transition firings), and the current state is reconstructed by replaying the events from the beginning.

Events as Transition Firings

In Petri net terms, an event is a transition firing:

{
  "event_type": "brew",
  "aggregate_id": "coffeeshop-1",
  "timestamp": "2024-01-19T10:30:00Z",
  "version": 42,
  "data": {
    "beans_used": 2,
    "water_used": 3
  }
}

The event records which transition fired, when, and with what data. The state change (tokens removed from input places, tokens added to output places) is deterministic given the event and the current marking.

Replay Equivalence

If both Go and JavaScript replay the same event log against the same model, they must arrive at the same final state. This is a stronger guarantee than ODE parity because it covers discrete-event execution, not just continuous simulation.

Event log: [brew, brew, restock, brew]

Go replay:   fold(apply, initial, events)  ->  State A
JS replay:   fold(apply, initial, events)  ->  State B

State A == State B  (must hold)

The apply function is the Petri net firing rule: check enabledness, subtract input tokens, add output tokens. Because the firing rule is deterministic and both implementations follow the same rule, replay produces identical states.
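
The fold can be sketched concretely; apply below implements the firing rule just described (the net and event shapes are illustrative):

```javascript
// Sketch of replay: state is a fold of the firing rule over the event
// log. apply() checks enabledness, subtracts input tokens, and adds
// output tokens, so any implementation replaying the same log reaches
// the same marking.
function apply(marking, event, net) {
  const t = net[event.event_type];
  if (!t) throw new Error("unknown transition: " + event.event_type);
  for (const [place, n] of Object.entries(t.inputs)) {
    if ((marking[place] || 0) < n) throw new Error(event.event_type + " not enabled");
  }
  const next = { ...marking };
  for (const [place, n] of Object.entries(t.inputs)) next[place] -= n;
  for (const [place, n] of Object.entries(t.outputs)) next[place] = (next[place] || 0) + n;
  return next;
}

const replay = (initial, events, net) =>
  events.reduce((m, e) => apply(m, e, net), initial);
```

Because apply is deterministic, a Go fold and a JavaScript fold over the same log cannot disagree unless one of them misimplements the rule, which is exactly what the parity check detects.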

Why Event Sourcing Fits

Event sourcing and Petri nets are natural partners:

  1. Events are transitions. Every transition firing is an event. The event log is the execution trace of the Petri net.

  2. State is the marking. The current marking (token counts across all places) is the state. It’s computed by applying the firing rule for each event in sequence.

  3. Replay is re-execution. Given the initial marking and the event log, you can reconstruct the state at any point in history. This enables auditing, debugging, and time-travel queries.

  4. The model is the schema. The Petri net topology defines what events are valid (which transitions exist, what their pre- and post-conditions are). The model constrains the event log — you can’t have a brew event if the net has no brew transition.

The Verification Loop

In production, the verification works as a loop:

  1. Client (browser, JavaScript) fires a transition and computes the new state locally
  2. Client sends the transition firing to the server
  3. Server (Go) independently computes the new state
  4. Server stores the event and returns the new state
  5. Client compares its locally computed state with the server’s response
  6. If they match: continue — both implementations agree
  7. If they don’t match: flag a discrepancy — investigation needed

This is the same pattern used in the ZK tic-tac-toe protocol (Chapter 12), except without the cryptographic proofs. The browser is a verifier: it independently computes what the server claims and checks the result.

Trust Model

The verification loop establishes a trust model:

  • The server is authoritative — its state is the canonical state
  • The client is a verifier — it checks the server’s work
  • Discrepancies indicate a bug in one or both implementations
  • Agreement increases confidence that the specification is unambiguous

No single implementation is trusted absolutely. Trust comes from agreement between independent implementations. This is the dual-implementation guarantee.

Token Model Equivalence

The tokenmodel package in go-pflow extends dual implementation to the schema level. Token models can be defined in three equivalent ways:

  1. JSON — the interchange format
  2. Go struct tags — embedded in Go code
  3. S-expression DSL — the human-readable format from Chapter 4

All three produce the same Petri net. The tokenmodel package includes tests verifying that parsing each format and building the net produces structurally identical results. This is isomorphism at the schema level — three notations, one mathematical object.

What Parity Proves

Dual implementation doesn’t prove the specification is correct in some absolute sense. It proves the specification is unambiguous — two independent teams, reading the same spec, implementing in different languages, arrive at the same behavior.

For Petri nets, unambiguous means:

  • The firing rule is clear — both implementations agree on when a transition is enabled
  • The state update is deterministic — both implementations produce the same marking after firing
  • The ODE dynamics are correct — both solvers produce the same trajectories within numerical tolerance
  • The event sourcing is faithful — replaying the same events produces the same state

These are exactly the properties you need for a formal modeling system. The mathematics works only if the implementation matches the theory. Dual implementation is how you check.

The Ecosystem View

Dual implementation is not just a testing technique — it’s an architectural principle. Every tool in the ecosystem can verify every other tool’s work:

JSON-LD Model (source of truth)
    |
    +-- pflow.xyz (JavaScript)  --  ODE simulation, visual editing
    |
    +-- go-pflow (Go)  --  ODE simulation, reachability, analysis
    |
    +-- petri-pilot (Go)  --  Code generation, event sourcing
    |
    +-- ZK circuits (gnark)  --  Cryptographic proofs

The JSON-LD model is the shared contract. Each tool processes it independently. Each tool can verify its results against any other tool processing the same model. The model format is declarative (Chapter 14), so there’s no ambiguity about what it means — only about how each tool computes from it. And dual implementation resolves that ambiguity.

This is the practical payoff of the universal abstraction. One formalism — Petri nets. One format — JSON-LD. Multiple implementations — Go, JavaScript, gnark circuits. Verification — by agreement. The mathematics provides the theory. Dual implementation provides the confidence that the code matches the mathematics.

Epilogue: What the Abstraction Sits On

Learning objective: Name the layered architecture this book actually built, see the six applications through the lens of net types, and identify the open problems worth pursuing.

This book promised a universal abstraction. It delivered one — but along the way, especially in Chapter 13, something shifted. The Petri net turned out to be a layer, not the foundation. The most powerful result in the book — that strategic value emerges from graph connectivity with no training data, no domain knowledge, and no Petri net firing semantics — lives underneath the formalism the book is named for.

That deserves to be said plainly, not buried in a subsection.

The Four-Layer Stack

The book built a stack, one layer at a time, without naming it as such until now:

Layer 4: ZK Verification      (Chapters 12-13)
         Cryptographic proof that a transition was valid.
         The stoichiometry matrix becomes circuit constraints.

Layer 3: ODE Dynamics         (Chapters 3, 5-10)
         Mass-action kinetics couples topology to state.
         Rate formula: v[t] = k[t] × ∏ M[inputs[t]]

Layer 2: Petri Net Semantics  (Chapters 1-2, 4)
         Firing rules, conservation laws, P-invariants.
         Tokens consumed and produced atomically.

Layer 1: Graph Theory         (Chapter 13)
         Bipartite directed graph. Degree centrality.
         Connectivity determines what matters.

Each layer adds something the layer below cannot express:

  • Graph theory tells you what connects to what. It cannot tell you what happens when you act — there are no tokens, no state, no dynamics.
  • Petri net semantics add state and atomicity. Transitions consume and produce. Conservation laws constrain the state space. But the formalism alone doesn’t tell you what happens first, or how fast.
  • ODE dynamics add time. Mass-action kinetics couple topology-derived rates to the current marking. You get trajectories, equilibria, predictions. But the trajectories are only as trustworthy as the implementation that computed them.
  • ZK verification adds proof. The stoichiometry matrix defines circuit constraints. A state transition is either provably valid or unprovable. Trust moves from “I ran the code” to “here is a cryptographic attestation.”

The book introduced these layers bottom-up in Parts I-III, but the reader encounters the stack’s true shape only in Chapter 13, when the rate auto-derivation reveals that the bottom layer — pure graph connectivity — carries more information than expected. The classic tic-tac-toe heuristic (center > corner > edge) falls out of counting connections. No game theory. No training. Just topology.
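The counting is small enough to verify directly. A minimal sketch in plain Go — no Petri net machinery, just the eight win lines of the 3×3 board — reproduces the heuristic:

```go
package main

import "fmt"

// lineDegrees counts, for each tic-tac-toe cell 0-8, how many of the
// eight win lines pass through it — pure connectivity, no game theory.
func lineDegrees() [9]int {
	lines := [][3]int{
		{0, 1, 2}, {3, 4, 5}, {6, 7, 8}, // rows
		{0, 3, 6}, {1, 4, 7}, {2, 5, 8}, // columns
		{0, 4, 8}, {2, 4, 6}, // diagonals
	}
	var degree [9]int
	for _, ln := range lines {
		for _, cell := range ln {
			degree[cell]++
		}
	}
	return degree
}

func main() {
	fmt.Println(lineDegrees()) // [3 2 3 2 4 2 3 2 3]: center 4, corners 3, edges 2
}
```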

The Petri net is not the foundation. It is the modeling layer — the place where graph structure acquires semantics. That’s valuable. It’s just not the whole story.

Six Applications, Five Types

Chapter 4 introduced the categorical net taxonomy before the reader had examples to anchor it. Now, after six worked applications in Part II, the taxonomy earns its weight:

ChapterApplicationNet TypeDefining Property
5Coffee ShopResourceNetConservation — ingredients are neither created nor destroyed
6Tic-Tac-ToeGameNetTurn control + conservation — pieces placed, never removed
7SudokuClassificationNetConstraint accumulation — each placement is evidence toward a solved board
8KnapsackComputationNetContinuous relaxation — ODE finds approximate optima
9Enzyme KineticsComputationNetNative domain — mass-action kinetics is the chemistry
10Texas Hold’emGameNetMulti-phase workflow + role-based turn control

The pattern: you never had to tell the Petri net what kind of system it was modeling. The net type emerged from how you wired the arcs. A ResourceNet conserves tokens because the topology conserves them — every arc into a transition has a matching arc out. A GameNet alternates turns because a turn-control place gates player transitions through mutual exclusion.
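The structural claim — conservation holds because inputs balance outputs — can be checked mechanically: every column of the incidence matrix must sum to zero. A sketch with a hypothetical two-place net:

```go
package main

import "fmt"

// conserves reports whether a net conserves total tokens: each transition's
// incidence column (consumption negative, production positive) sums to zero.
func conserves(incidence [][]int) bool {
	if len(incidence) == 0 {
		return true
	}
	for t := range incidence[0] { // columns = transitions
		sum := 0
		for p := range incidence { // rows = places
			sum += incidence[p][t]
		}
		if sum != 0 {
			return false
		}
	}
	return true
}

func main() {
	// Hypothetical net: one transition moves a token between two places.
	C := [][]int{{-1}, {+1}}
	fmt.Println(conserves(C)) // true — the topology conserves tokens
}
```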

The taxonomy isn’t a labeling scheme imposed from outside. It’s a description of structural invariants that the topology either has or doesn’t. This is the same insight as “it’s graph theory, not Petri net theory,” seen from a different angle: the structure carries the meaning.

What the Book Proved

Three claims survived from Chapter 1 to Chapter 18:

Small models beat black boxes. Every application in this book is inspectable. You can look at the tic-tac-toe topology and count win lines. You can read the stoichiometry matrix and see the differential equations. You can audit the ZK circuit and verify what it proves. At no point did you need to trust a model you couldn’t read. This is the opposite of the machine learning approach, where the knowledge is in the weights and the weights are opaque. The cost is that Petri net models require a human to design the topology. The benefit is that the topology is the explanation.

One formalism, multiple tools. The JSON-LD model format (Chapter 14) is processed identically by the visual editor (Chapter 15), the code generator (Chapter 16), the Go library (Chapter 17), and the ZK compiler (Chapter 13). Dual implementation (Chapter 18) verifies that independent implementations agree. This isn’t a theoretical property — it’s tested, deployed, and running on-chain.

Topology is primary, rates are secondary. Change the rate constants and the system’s quantitative behavior shifts. Change the topology and the system becomes a different system. This inversion — structure over parameters — holds across all six applications and both modes (combinatorial and continuous). It’s the book’s most load-bearing claim, and Chapter 13 gave it a precise formulation.

What the Book Didn’t Solve

The limitations section of Chapter 13 was honest, but it was framed as caveats. They’re better understood as open problems.

Multi-hop connectivity. The rate auto-derivation counts direct connections: candidate → unique output → target input. For tic-tac-toe (depth 1), this is sufficient. For chess (depth varies), it captures material value but misses tactics. For Go (depth 19×19), it captures almost nothing. The question: can multi-hop reachability analysis — T-invariants, unfoldings, or iterative message-passing over the bipartite graph — extend the one-hop algorithm to deeper games? This is a graph theory question, not a Petri net question, which is itself instructive.

Weighted targets. The algorithm treats every target connection as weight 1. A checkmate path and a pawn capture score the same. The fix seems straightforward — assign importance weights to targets — but the principled question is where those weights come from. Can topology derive them recursively? Or does heterogeneous objective weighting require domain knowledge that the graph alone cannot supply?

Dynamic rates. Topology-derived rates are static. A corner’s strategic value changes mid-game when it completes a fork threat. The tactical scoring layer in Chapter 6 handles this for tic-tac-toe, but it’s an add-on, not part of the rate derivation. Can the rate formula incorporate state-dependent topology — recomputing connectivity over the reachable subgraph rather than the full graph? This would unify the strategic (topology) and tactical (state) layers.

Circuit scaling. The selector-based encoding grows as O(|P| × |T|). The tic-tac-toe circuit has ~24,500 constraints. A net with 1,000 places and 500 transitions would have ~12.5 million constraints — feasible with current hardware but pushing limits. Recursive proof composition (proving batches of transitions, then proving the batch proofs) is the likely path forward. The Petri net structure may help here: independent subnets can be proved in parallel and composed.

Composition verification. Chapter 4 described cross-schema composition with EventLinks, DataLinks, TokenLinks, and GuardLinks. Chapter 13 described single-net ZK verification. The gap: proving that a composed system of multiple nets preserves the invariants of each component. Assume-guarantee reasoning suggests this is tractable — each component’s proof is independent, and composition only needs to verify the boundaries. But the ZK pipeline doesn’t implement this yet.

The Premise, Revisited

Chapter 1 opened with a complaint: informal models fail because they don’t capture the structure of the systems they represent. Concurrency is an afterthought. Resources are invisible. State is implicit.

Petri nets fix this by making structure explicit. Places hold state. Transitions change it. Arcs constrain what can flow where. Conservation laws fall out of the topology. The model is the specification.

But the deeper lesson — the one that emerged through writing this book, not before it — is that the Petri net formalism is itself a layer over something simpler. The structure that matters most is the directed bipartite graph. The Petri net adds semantics to that graph. The ODE adds dynamics. The ZK circuit adds proof. Each layer is useful. None is the whole story.

If there’s a single sentence version of what this book argues, it might be: the topology of a system — what connects to what, through what — determines more about its behavior than any amount of parameter tuning, training data, or runtime optimization. The Petri net is one way to read that topology. It turned out to be a very good way. But the topology was always there, waiting to be read.

Appendix A: Solver Parameter Reference

This appendix documents the ODE solver methods, configuration options, and presets available in go-pflow’s solver package.

Solver Methods

Tsit5 (Default)

Tsitouras 5th-order Runge-Kutta with embedded 4th-order error estimator. The recommended solver for nearly all Petri net simulations.

  • Order: 5 (global error ~ O(h⁵))
  • Stages: 7
  • Adaptive: Yes (embedded error estimation)
  • Reference: Ch. Tsitouras, “Runge-Kutta pairs of order 5(4) satisfying only the first column simplifying assumption”, Computers & Mathematics with Applications, 62 (2011) 770-775
sol := solver.Solve(prob, solver.Tsit5(), solver.DefaultOptions())

RK45 (Dormand-Prince)

Classic Dormand-Prince 5(4) method. The algorithm behind MATLAB’s ode45. Comparable to Tsit5 with slightly different error estimation characteristics.

  • Order: 5
  • Stages: 7
  • Adaptive: Yes
  • Reference: J.R. Dormand & P.J. Prince, “A family of embedded Runge-Kutta formulae”, Journal of Computational and Applied Mathematics, 6 (1980) 19-26
sol := solver.Solve(prob, solver.RK45(), solver.DefaultOptions())

BS32 (Bogacki-Shampine)

3rd-order method with embedded 2nd-order error estimator. Useful when Tsit5/RK45 are overkill — simpler problems with less stringent accuracy requirements.

  • Order: 3
  • Stages: 4
  • Adaptive: Yes
  • Reference: P. Bogacki & L.F. Shampine, “A 3(2) pair of Runge-Kutta formulas”, Appl. Math. Lett., 2 (1989) 321-325

RK4 (Classic Runge-Kutta)

Fixed-step 4th-order method. No error estimation. Use with Adaptive: false.

  • Order: 4
  • Stages: 4
  • Adaptive: No (fixed step)
  • Use case: Teaching, debugging, when fixed step size is desired

Heun (Improved Euler)

2nd-order predictor-corrector method. Simple and fast.

  • Order: 2
  • Stages: 2
  • Adaptive: No

Euler (Forward Euler)

1st-order method. Primarily for teaching and debugging.

  • Order: 1
  • Stages: 1
  • Adaptive: No
  • Warning: Requires very small step sizes for accuracy

Implicit Euler

For stiff systems where explicit methods require impractically small steps. Uses Newton iteration to solve the implicit step.

sol := solver.ImplicitEuler(prob, solver.StiffOptions())

TRBDF2

Second-order implicit method combining trapezoidal rule with backward differentiation. More accurate than implicit Euler for stiff systems.

sol := solver.TRBDF2(prob, solver.StiffOptions())

SolveImplicit (Auto-Detection)

Automatically detects stiffness and switches between explicit (Tsit5) and implicit (TRBDF2) methods.

sol := solver.SolveImplicit(prob, solver.DefaultOptions())

Configuration Parameters

The Options struct controls solver behavior:

| Parameter | Type | Description |
|---|---|---|
| Dt | float64 | Initial time step |
| Dtmin | float64 | Minimum allowed time step |
| Dtmax | float64 | Maximum allowed time step |
| Abstol | float64 | Absolute error tolerance |
| Reltol | float64 | Relative error tolerance |
| Maxiters | int | Maximum number of iterations |
| Adaptive | bool | Enable adaptive step size control |

How Adaptive Stepping Works

When Adaptive is true, the solver:

  1. Computes both the 5th-order and 4th-order solutions at each step
  2. Estimates the local error as the difference between them
  3. Computes a scaled error: err = |error| / (Abstol + Reltol * max(|y|, |y_new|))
  4. If err <= 1: accepts the step and grows Dt (up to Dtmax)
  5. If err > 1: rejects the step and shrinks Dt (down to Dtmin)

The step size factor is: factor = 0.9 * (1/err)^(1/(order+1))
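The accept/reject arithmetic above fits in two functions. This is a sketch of the documented formulas, not the go-pflow solver source:

```go
package main

import (
	"fmt"
	"math"
)

// scaledError implements err = |error| / (Abstol + Reltol * max(|y|, |y_new|)).
func scaledError(errEst, y, yNew, abstol, reltol float64) float64 {
	return math.Abs(errEst) / (abstol + reltol*math.Max(math.Abs(y), math.Abs(yNew)))
}

// stepFactor implements factor = 0.9 * (1/err)^(1/(order+1)).
// err <= 1 yields factor >= ~0.9 (step accepted, Dt grows toward Dtmax);
// err > 1 yields factor < 0.9 (step rejected, Dt shrinks toward Dtmin).
func stepFactor(err float64, order int) float64 {
	return 0.9 * math.Pow(1/err, 1/float64(order+1))
}

func main() {
	// A step whose local error estimate is well inside tolerance:
	err := scaledError(1e-7, 1.0, 1.01, 1e-6, 1e-3)
	fmt.Printf("err=%.2e factor=%.2f\n", err, stepFactor(err, 5))
	// err < 1, so factor > 1: the step is accepted and Dt grows
}
```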

Option Presets

General Purpose

| Preset | Dt | Dtmin | Dtmax | Abstol | Reltol | Maxiters | Use Case |
|---|---|---|---|---|---|---|---|
| DefaultOptions() | 0.01 | 1e-6 | 0.1 | 1e-6 | 1e-3 | 100,000 | Most problems |
| JSParityOptions() | 0.01 | 1e-6 | 1.0 | 1e-6 | 1e-3 | 100,000 | Match JavaScript solver |
| FastOptions() | 0.1 | 1e-4 | 1.0 | 1e-2 | 1e-2 | 1,000 | Interactive, ~10x speedup |
| AccurateOptions() | 0.001 | 1e-8 | 0.1 | 1e-9 | 1e-6 | 1,000,000 | Research, publishing |
| StiffOptions() | 0.001 | 1e-10 | 0.01 | 1e-8 | 1e-5 | 500,000 | Stiff systems |

Domain-Specific

| Preset | Dt | Dtmin | Dtmax | Abstol | Reltol | Maxiters | Use Case |
|---|---|---|---|---|---|---|---|
| GameAIOptions() | 0.1 | 1e-3 | 1.0 | 1e-2 | 1e-2 | 500 | Move evaluation |
| EpidemicOptions() | 0.01 | 1e-6 | 0.5 | 1e-6 | 1e-4 | 200,000 | SIR/SEIR models |
| WorkflowOptions() | 0.1 | 1e-4 | 10.0 | 1e-4 | 1e-3 | 50,000 | Process simulation |
| LongRunOptions() | 0.1 | 1e-4 | 10.0 | 1e-5 | 1e-3 | 500,000 | Extended simulations |

Equilibrium Detection

The equilibrium detector stops the simulation early when the system reaches steady state.

Equilibrium Parameters

| Parameter | Type | Description |
|---|---|---|
| Tolerance | float64 | Maximum rate of change to consider as equilibrium |
| ConsecutiveSteps | int | Number of consecutive steps below tolerance required |
| MinTime | float64 | Minimum time before checking for equilibrium |
| CheckInterval | int | Check every N steps (0 = every step) |

Equilibrium Presets

| Preset | Tolerance | Consecutive | MinTime | Interval |
|---|---|---|---|---|
| DefaultEquilibriumOptions() | 1e-6 | 5 | 0.1 | 10 |
| FastEquilibriumOptions() | 1e-4 | 3 | 0.01 | 5 |
| StrictEquilibriumOptions() | 1e-9 | 10 | 1.0 | 1 |

Convenience Functions

// Find equilibrium with default settings
state, reached := solver.FindEquilibrium(prob)

// Fast equilibrium (aggressive settings)
state, reached := solver.FindEquilibriumFast(prob)

// Accurate equilibrium (strict settings)
state, reached := solver.FindEquilibriumAccurate(prob)

Combined Option Pairs

For convenience, OptionPair bundles solver and equilibrium options:

pair := solver.GameAIOptionPair()
sol, eq := solver.SolveUntilEquilibrium(prob, solver.Tsit5(), pair.Solver, pair.Equilibrium)

Available pairs: GameAIOptionPair(), EpidemicOptionPair(), WorkflowOptionPair(), LongRunOptionPair().

When to Use Which Solver

| Situation | Solver | Options |
|---|---|---|
| First attempt / don't know | Tsit5() | DefaultOptions() |
| Match browser output | Tsit5() | JSParityOptions() |
| Game AI move evaluation | Tsit5() | GameAIOptions() |
| Interactive / real-time | Tsit5() | FastOptions() |
| Publishing results | Tsit5() | AccurateOptions() |
| SIR/SEIR epidemics | Tsit5() | EpidemicOptions() |
| Process mining validation | Tsit5() | WorkflowOptions() |
| System won't converge | ImplicitEuler() | StiffOptions() |
| Auto-detect stiffness | SolveImplicit() | DefaultOptions() |
| Fixed step (teaching) | RK4() | Adaptive: false |

If Tsit5() with DefaultOptions() doesn’t work for your problem, the most common fixes are:

  1. Too slow: Use FastOptions() (trades accuracy for speed)
  2. Inaccurate: Use AccurateOptions() (tighter tolerances)
  3. Unstable/NaN: Use StiffOptions() with ImplicitEuler() (stiff system)
  4. Doesn’t converge: Extend Tspan, increase Maxiters, or check for conservation violations

Appendix B: Token Language Grammar

This appendix documents the S-expression DSL for defining token model schemas. The DSL is an alternative to JSON for defining Petri net models — more readable for humans, equivalent in expressiveness.

Lexical Elements

The lexer recognizes these token types:

| Token | Pattern | Examples |
|---|---|---|
| LParen | `(` | `(` |
| RParen | `)` | `)` |
| Arrow | `->` | `->` |
| Keyword | `:` followed by symbol | `:type`, `:guard`, `:keys`, `:value`, `:initial`, `:exported`, `:kind` |
| Symbol | letter or `_`, then letters/digits/`_`/`-`/`[`/`]`/`.` | `balances`, `map[string]int64`, `v1.0.0` |
| Number | digits, optionally prefixed with `-` | `42`, `-1`, `0` |
| String | `"..."` with backslash escapes | `"hello"`, `"line\n"` |
| Guard | `{...}` with nested brace support | `{amount > 0}`, `{balances[from] >= amount}` |
| Comment | `;` to end of line | `; this is ignored` |

Whitespace (spaces, tabs, newlines) separates tokens but is otherwise ignored.
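The Symbol and Number rows translate directly into regular expressions. These regexes are illustrative — the actual lexer is hand-written — but they encode the same rules:

```go
package main

import (
	"fmt"
	"regexp"
)

// Illustrative patterns for two token classes from the table above.
// Symbol: letter or _, then letters/digits/_/-/[/]/.
// Number: digits, optionally prefixed with -.
var (
	symbolRe = regexp.MustCompile(`^[A-Za-z_][A-Za-z0-9_\-\[\].]*$`)
	numberRe = regexp.MustCompile(`^-?\d+$`)
)

func main() {
	for _, s := range []string{"balances", "map[string]int64", "v1.0.0"} {
		fmt.Println(s, symbolRe.MatchString(s)) // all true
	}
	fmt.Println(numberRe.MatchString("-1"), numberRe.MatchString("abc")) // true false
}
```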

Top-Level Structure

A schema definition has the form:

(schema <name>
  (version <version-string>)       ; optional
  (states ...)                     ; required
  (actions ...)                    ; required
  (arcs ...)                       ; required
  (constraints ...))               ; optional

The <name> is a symbol identifying the schema. Clauses may appear in any order, but the conventional order is version, states, actions, arcs, constraints.

States

The states clause defines places in the Petri net:

(states
  (state <id> :type <type> :kind <kind> :initial <value> :exported))

State Keywords

| Keyword | Value | Default | Description |
|---|---|---|---|
| :type | symbol | (none) | Data type: string, int64, float64, bool, map[string]int64, map[string]string, map[string]map[string]int64 |
| :kind | token or data | data | Whether the place holds a token count or structured data |
| :initial | number, string, or nil | (none) | Initial value |
| :exported | (no value) | false | Flag — presence means the state is externally visible |

All keywords are optional. A minimal state definition is just (state <id>).

Examples

; Token-counting place with initial tokens
(state pending :kind token :initial 1)

; Data place holding a map
(state balances :type map[string]int64 :exported)

; Simple data place with initial value
(state total_supply :type int64 :initial 0 :exported)

Actions

The actions clause defines transitions:

(actions
  (action <id> :guard {<expression>}))

Action Keywords

| Keyword | Value | Description |
|---|---|---|
| :guard | guard expression in `{...}` | Boolean precondition for firing |

Examples

; Action with no guard (always enabled when inputs are available)
(action ship)

; Action with guard expression
(action transfer :guard {balances[from] >= amount && amount > 0})

; Action with simple guard
(action mint :guard {amount > 0})

Arcs

The arcs clause defines connections between states and actions:

(arcs
  (arc <source> -> <target> :keys (<key1> <key2> ...) :value <binding>))

The -> arrow separates source from target. Arcs are bipartite: they connect a state to an action (input arc) or an action to a state (output arc).

Arc Keywords

| Keyword | Value | Description |
|---|---|---|
| :keys | parenthesized list of symbols | Map access keys for data places |
| :value | symbol | Value binding name (defaults to amount) |

Examples

; Simple token arc (no keywords needed)
(arc pending -> ship)
(arc ship -> shipped)

; Data arc with keys and value
(arc balances -> transfer :keys (from) :value amount)
(arc transfer -> balances :keys (to) :value amount)

; Arc with multiple keys
(arc ledger -> settle :keys (from to) :value amount)

Constraints

The constraints clause defines invariants:

(constraints
  (constraint <id> {<expression>}))

The expression is enclosed in {...} braces, like guard expressions. Constraints are checked at validation time and should hold for all reachable markings.

Examples

; Conservation law
(constraint conservation {sum(balances) == totalSupply})

; Mutual exclusion
(constraint mutex {pending + shipped + delivered == 1})

Guard Expression Syntax

Guard expressions (used in both :guard and constraint) support:

| Category | Operators | Examples |
|---|---|---|
| Comparison | `==`, `!=`, `<`, `>`, `<=`, `>=` | `amount > 0` |
| Boolean | `&&`, `\|\|`, `!` | `a > 0 && b > 0` |
| Arithmetic | `+`, `-`, `*`, `/` | `sum(balances)` |
| Map access | `name[key]` | `balances[from]` |
| Aggregates | `sum()`, `count()`, `tokens()`, `minOf()`, `maxOf()` | `sum(balances) == totalSupply` |

Guard expressions are opaque to the DSL parser — the lexer captures everything between { and } as a single token. The guard DSL parser in petri-pilot evaluates these expressions at runtime against the current state.
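Brace matching with nesting is a small depth counter. A sketch of the rule — not the production lexer:

```go
package main

import (
	"fmt"
	"strings"
)

// captureGuard returns the text of a {...} guard starting at input[0],
// honoring nested braces, with the outer braces stripped.
func captureGuard(input string) (string, bool) {
	if !strings.HasPrefix(input, "{") {
		return "", false
	}
	depth := 0
	for i, r := range input {
		switch r {
		case '{':
			depth++
		case '}':
			depth--
			if depth == 0 {
				return input[1:i], true // everything between the outer braces
			}
		}
	}
	return "", false // unterminated guard
}

func main() {
	g, _ := captureGuard("{balances[from] >= amount} ...rest of schema")
	fmt.Println(g) // balances[from] >= amount
}
```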

Complete Examples

ERC-20 Token

(schema erc20-token
  (version v1.0.0)

  (states
    (state balances :type map[string]int64 :exported)
    (state total_supply :type int64 :initial 0 :exported))

  (actions
    (action transfer :guard {balances[from] >= amount && amount > 0})
    (action mint :guard {amount > 0}))

  (arcs
    (arc balances -> transfer :keys (from) :value amount)
    (arc transfer -> balances :keys (to) :value amount)
    (arc mint -> balances :keys (to) :value amount)
    (arc mint -> total_supply :value amount))

  (constraints
    (constraint conservation {sum(balances) == total_supply})))

Order Workflow

(schema order-processing
  (version v1.0.0)

  (states
    (state received :kind token :initial 1)
    (state validated :kind token)
    (state shipped :kind token)
    (state completed :kind token))

  (actions
    (action validate)
    (action ship)
    (action deliver))

  (arcs
    (arc received -> validate)
    (arc validate -> validated)
    (arc validated -> ship)
    (arc ship -> shipped)
    (arc shipped -> deliver)
    (arc deliver -> completed))

  (constraints
    (constraint conservation {received + validated + shipped + completed == 1})))

AST Representation

The parser produces an abstract syntax tree with these node types:

| Node | Fields | Description |
|---|---|---|
| SchemaNode | Name, Version, States, Actions, Arcs, Constraints | Root node |
| StateNode | ID, Type, Kind, Initial, Exported | A place definition |
| ActionNode | ID, Guard | A transition definition |
| ArcNode | Source, Target, Keys, Value | A connection |
| ConstraintNode | ID, Expr | An invariant |

The AST is the same regardless of whether the schema was defined via the S-expression DSL, the Go struct tag syntax, or the fluent builder API. All three produce identical SchemaNode trees, which are then compiled to Petri nets.

Equivalence with JSON

Every DSL schema has an equivalent JSON representation. The mapping is direct:

| DSL | JSON |
|---|---|
| `(state id :kind token :initial 1)` | `{"id": "id", "kind": "token", "initial": 1}` |
| `(action id :guard {expr})` | `{"id": "id", "guard": "expr"}` |
| `(arc src -> tgt :keys (k) :value v)` | `{"from": "src", "to": "tgt", "keys": ["k"], "value": "v"}` |
| `(constraint id {expr})` | `{"id": "id", "expr": "expr"}` |

The DSL is syntactic sugar. The JSON schema (Appendix C) is the canonical format.

Appendix C: JSON Schema Reference

This appendix documents the JSON model format used by petri-pilot for code generation. The schema defines what a valid Petri net model looks like — places, transitions, arcs, and the higher-level structures (roles, views, navigation, admin, event sourcing, simulation) that drive application generation.

The full JSON Schema is published at github.com/pflow-xyz/petri-pilot/schema/petri-model.schema.json.

Top-Level Structure

A model is a JSON object with four required fields and several optional ones:

{
  "name": "order-processing",
  "version": "1.0.0",
  "description": "Order fulfillment workflow",
  "places": [...],
  "transitions": [...],
  "arcs": [...],
  "constraints": [...],
  "roles": [...],
  "access": [...],
  "views": [...],
  "navigation": {...},
  "admin": {...},
  "eventSourcing": {...},
  "simulation": {...}
}
| Field | Required | Type | Description |
|---|---|---|---|
| name | Yes | string | Unique identifier, kebab-case (`^[a-z][a-z0-9-]*$`) |
| version | No | string | Semantic version (e.g., "1.0.0") |
| description | No | string | Human-readable description |
| places | Yes | array | States in the Petri net (min 1) |
| transitions | Yes | array | Actions/events (min 1) |
| arcs | Yes | array | Connections between places and transitions (min 1) |
| constraints | No | array | Invariants that must hold |
| roles | No | array | Named roles for access control |
| access | No | array | Access control rules |
| views | No | array | UI view definitions |
| navigation | No | object | Navigation menu configuration |
| admin | No | object | Admin dashboard configuration |
| eventSourcing | No | object | Snapshot and retention configuration |
| simulation | No | object | ODE simulation configuration |

Places

A place holds state — either token counts (classic Petri net) or structured data.

{
  "id": "order_received",
  "description": "Order has been received but not yet validated",
  "initial": 1,
  "kind": "token",
  "type": "string",
  "initial_value": null,
  "exported": false,
  "persisted": false
}
| Field | Required | Type | Default | Description |
|---|---|---|---|---|
| id | Yes | string | — | Unique identifier, snake_case (`^[a-z][a-z0-9_]*$`) |
| description | No | string | — | Human-readable description |
| initial | No | integer | 0 | Initial token count (min 0) |
| kind | No | "token" or "data" | "token" | Whether this place holds token counts or structured data |
| type | No | string | — | Data type for data kind places |
| initial_value | No | any | — | Initial value for data places |
| exported | No | boolean | false | Whether externally visible |
| persisted | No | boolean | false | Whether persisted in event store |

Data Types

For kind: "data" places, the type field specifies the data type:

| Type | Description |
|---|---|
| string | Text value |
| int64 | 64-bit integer |
| float64 | 64-bit floating point |
| bool | Boolean |
| map[string]int64 | Map from string keys to integer values |
| map[string]string | Map from string keys to string values |
| map[string]map[string]int64 | Nested map (e.g., per-user balances by asset) |

Transitions

A transition is an action that fires when enabled. Each transition becomes an HTTP endpoint in the generated application.

{
  "id": "validate_order",
  "description": "Validate the order details",
  "guard": "amount > 0",
  "event_type": "OrderValidated",
  "http_method": "POST",
  "http_path": "/api/validate",
  "bindings": {
    "amount": "body.amount"
  }
}
| Field | Required | Type | Default | Description |
|---|---|---|---|---|
| id | Yes | string | — | Unique identifier, snake_case |
| description | No | string | — | Used in OpenAPI spec |
| guard | No | string | — | Boolean precondition (guard DSL) |
| event_type | No | string | PascalCase of id | Custom event type name |
| http_method | No | string | "POST" | GET, POST, PUT, DELETE, PATCH |
| http_path | No | string | /api/{id} | Custom HTTP path |
| bindings | No | object | — | Parameter bindings from request |

Guard Syntax

Guards are boolean expressions evaluated against the current state:

| Operator | Meaning | Example |
|---|---|---|
| `==`, `!=` | Equality | `status == 'approved'` |
| `<`, `>`, `<=`, `>=` | Comparison | `amount > 0` |
| `&&`, `\|\|`, `!` | Boolean | `a > 0 && b > 0` |
| `name[key]` | Map access | `balances[from]` |

Arcs

An arc connects a place to a transition (input) or a transition to a place (output).

{
  "from": "balances",
  "to": "transfer",
  "weight": 1,
  "keys": ["from"],
  "value": "amount"
}
| Field | Required | Type | Default | Description |
|---|---|---|---|---|
| from | Yes | string | — | Source element ID |
| to | Yes | string | — | Target element ID |
| weight | No | integer | 1 | Tokens consumed/produced (min 1) |
| keys | No | array of strings | — | Map access keys for data places |
| value | No | string | "amount" | Value binding name |

Constraints

An invariant that must always hold.

{
  "id": "conservation",
  "expr": "received + validated + shipped + completed == 1"
}
| Field | Required | Type | Description |
|---|---|---|---|
| id | Yes | string | Unique identifier |
| expr | Yes | string | Boolean expression over place token counts |

Roles

Named roles for access control, with inheritance.

{
  "id": "admin",
  "name": "Administrator",
  "description": "Full access to all operations",
  "inherits": ["manager"]
}
| Field | Required | Type | Description |
|---|---|---|---|
| id | Yes | string | Unique role identifier, snake_case |
| name | No | string | Human-readable name |
| description | No | string | What this role represents |
| inherits | No | array of strings | Parent role IDs (inherits their permissions) |

Access Rules

Maps transitions to allowed roles.

{
  "transition": "approve_order",
  "roles": ["manager", "admin"],
  "guard": "user.department == 'finance'"
}
| Field | Required | Type | Description |
|---|---|---|---|
| transition | Yes | string | Transition ID or "*" for all |
| roles | No | array of strings | Allowed roles (empty = any authenticated user) |
| guard | No | string | Additional guard expression |

Views

UI view definitions for forms, tables, and detail pages.

{
  "id": "order_detail",
  "name": "Order Details",
  "kind": "detail",
  "description": "Shows order information",
  "groups": [...],
  "actions": ["approve", "reject"]
}
| Field | Required | Type | Description |
|---|---|---|---|
| id | Yes | string | Unique view identifier |
| name | No | string | Display name |
| kind | No | string | "form", "card", "table", or "detail" |
| description | No | string | What this view shows |
| groups | No | array | Logical groupings of fields |
| actions | No | array of strings | Transition IDs triggerable from this view |

View Groups

{
  "id": "customer_info",
  "name": "Customer Information",
  "fields": [...]
}

View Fields

{
  "binding": "customer_email",
  "label": "Email Address",
  "type": "email",
  "required": true,
  "readonly": false,
  "placeholder": "customer@example.com"
}

Field types: text, number, email, date, datetime, select, checkbox, textarea, password, hidden.

Navigation

Navigation menu configuration. Generates the /api/navigation endpoint.

{
  "brand": "Order Tracker",
  "items": [
    {
      "label": "Orders",
      "path": "/orders",
      "icon": "box",
      "roles": []
    },
    {
      "label": "Admin",
      "path": "/admin",
      "icon": "gear",
      "roles": ["admin"]
    }
  ]
}

Admin

Admin dashboard configuration. Generates /admin/* endpoints.

{
  "enabled": true,
  "path": "/admin",
  "roles": ["admin"],
  "features": ["list", "detail", "history", "transitions"]
}

Features: list (instance listing), detail (instance detail page), history (event history), transitions (manual transition firing).

Event Sourcing

Snapshot and retention configuration.

{
  "snapshots": {
    "enabled": true,
    "frequency": 100
  },
  "retention": {
    "events": "90d",
    "snapshots": "1y"
  }
}

Retention durations use the pattern ^\d+[dwmy]$ — number followed by d (days), w (weeks), m (months), or y (years).
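The pattern is easy to validate with any regex engine; for example, in Go:

```go
package main

import (
	"fmt"
	"regexp"
)

// Retention durations follow ^\d+[dwmy]$ — a number plus a unit suffix:
// d (days), w (weeks), m (months), or y (years).
var retentionRe = regexp.MustCompile(`^\d+[dwmy]$`)

func main() {
	for _, s := range []string{"90d", "1y", "2w", "90 days", "d90"} {
		fmt.Println(s, retentionRe.MatchString(s))
	}
	// 90d, 1y, 2w match; "90 days" and "d90" do not
}
```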

Simulation

ODE simulation configuration for AI move evaluation.

{
  "objective": "win_x - win_o",
  "players": {
    "x": {
      "maximizes": true,
      "turnPlace": "turn_x",
      "transitions": ["move_x_0", "move_x_1"]
    },
    "o": {
      "maximizes": false,
      "turnPlace": "turn_o",
      "transitions": ["move_o_0", "move_o_1"]
    }
  },
  "solver": {
    "tspan": [0, 10],
    "dt": 0.01,
    "rates": {
      "move_x_0": 1.0,
      "move_o_0": 1.0
    }
  }
}

Objective Functions

The objective field uses guard DSL syntax extended with aggregate functions:

| Function | Description | Example |
|---|---|---|
| tokens('place') | Token count at a place | tokens('goal') |
| sum('place') | Sum of values in a data place | sum('score') |
| count('place') | Count of entries in a map place | count('inventory') |
| minOf('place') | Minimum value | minOf('health') |
| maxOf('place') | Maximum value | maxOf('score') |

Solver Configuration

| Field | Type | Default | Description |
|---|---|---|---|
| tspan | [number, number] | [0, 10] | Simulation time span |
| dt | number | 0.01 | Initial time step |
| rates | object | all 1.0 | Per-transition firing rates |

Minimal Example

The smallest valid model:

{
  "name": "toggle",
  "places": [
    {"id": "off", "initial": 1},
    {"id": "on"}
  ],
  "transitions": [
    {"id": "switch"}
  ],
  "arcs": [
    {"from": "off", "to": "switch"},
    {"from": "switch", "to": "on"}
  ]
}

This defines a one-shot toggle: a token starts in off, the switch transition fires, and the token moves to on. Four fields, two places, one transition, two arcs — the simplest possible Petri net application.
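The toggle's behavior follows directly from the firing rule. A self-contained sketch — a hand-rolled interpreter for illustration, not the petri-pilot runtime — fires switch once and then finds it disabled:

```go
package main

import "fmt"

// arc connects a place to a transition (input) or a transition to a place (output).
type arc struct {
	from, to string
	weight   int
}

// enabled: every input place holds at least the arc weight.
func enabled(marking map[string]int, t string, arcs []arc) bool {
	for _, a := range arcs {
		if a.to == t && marking[a.from] < a.weight {
			return false
		}
	}
	return true
}

// fire consumes from input places and produces into output places.
func fire(marking map[string]int, t string, arcs []arc) {
	for _, a := range arcs {
		if a.to == t {
			marking[a.from] -= a.weight
		}
		if a.from == t {
			marking[a.to] += a.weight
		}
	}
}

func main() {
	arcs := []arc{{"off", "switch", 1}, {"switch", "on", 1}}
	m := map[string]int{"off": 1, "on": 0}
	if enabled(m, "switch", arcs) {
		fire(m, "switch", arcs)
	}
	fmt.Println(m["off"], m["on"])          // 0 1
	fmt.Println(enabled(m, "switch", arcs)) // false — one-shot: off is now empty
}
```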

Appendix D: Glossary

Arc
A directed connection between a place and a transition (input arc) or a transition and a place (output arc). Arcs define the flow relation of the net. Each arc has a weight (default 1) specifying how many tokens are consumed or produced.
Bipartite graph
A graph whose nodes divide into two disjoint sets, with edges only between sets, never within. Petri nets are bipartite: places connect to transitions, transitions connect to places, but places never connect directly to places.
Bounded
A net is bounded if every place has a finite maximum token count across all reachable markings. A net is k-bounded if no place ever exceeds k tokens. Boundedness is a safety property — it guarantees the system can’t accumulate unbounded resources.
CID (Content Identifier)
A hash-based identifier for content-addressed data. In the pflow ecosystem, CIDs identify JSON-LD models deterministically — the same model always produces the same CID regardless of where or when it’s computed (Chapter 14).
Conservation law
An equation x^T · C = 0, where x is a vector of weights and C is the incidence matrix. Conservation laws prove that weighted sums of token counts remain constant across all firings. Example: in an SIR model, S + I + R = N for all time.
Context (code generation)
The universal intermediate representation in petri-pilot’s code generation pipeline. The Context computes everything templates need from the raw model — derived structures, feature flags, helper methods. Templates are pure functions from Context to source code (Chapter 16).
Continuous relaxation
Replacing discrete token counts with continuous concentrations and firing rules with differential equations. The Petri net becomes a system of ODEs via mass-action kinetics, enabling simulation with numerical solvers (Chapter 3).
Deadlock
A marking where no transition is enabled. The system is stuck — no further computation is possible. Deadlock detection is a key analysis performed by the reachability package.
Enabled
A transition t is enabled at a marking m if every input place has at least as many tokens as the corresponding arc weight: m(p) ≥ W(p, t) for all input places p. An enabled transition may fire; a disabled one cannot.
Equilibrium
A state where all derivatives are approximately zero — the system has stopped changing. The solver’s equilibrium detector monitors the derivative norm and stops early when the system reaches steady state.
Event sourcing
A state management pattern where state is computed by replaying an immutable log of events from the initial state: state_n = apply(state_{n−1}, event_n). In Petri net terms, events are transition firings and state is the marking (Chapter 18).
Firing rule
The execution semantics of Petri nets. When a transition fires, it atomically removes tokens from input places and adds tokens to output places according to arc weights. The new marking is m' = m + C · e_t, where e_t is the unit vector for transition t.
Groth16
A zero-knowledge proof system used in pflow for verifying Petri net transitions without revealing state. Produces constant-size proofs with fast verification, at the cost of a per-circuit trusted setup (Chapter 12).
Guard
A boolean expression attached to a transition that must evaluate to true for the transition to be enabled, in addition to the standard token requirements. Guards extend the firing rule with data-dependent conditions.
Incidence matrix
The matrix C = O − I, where O is the output matrix (tokens produced) and I is the input matrix (tokens consumed). Each column represents a transition; each row represents a place. The entry C[p][t] is the net change in place p when transition t fires (Chapter 2).
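For the one-transition toggle net from Appendix C, the matrix is small enough to compute by hand. An illustrative sketch (not the pflow analysis API):

```javascript
// Incidence matrix C = O − I for the toggle net: rows are the places
// [off, on], the single column is the transition [switch].
const I = [[1], [0]]; // input matrix: switch consumes 1 token from off
const O = [[0], [1]]; // output matrix: switch produces 1 token in on

const C = I.map((row, p) => row.map((v, t) => O[p][t] - v));
console.log(C); // [ [ -1 ], [ 1 ] ] — firing switch moves one token off → on
```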
JSON-LD
JSON for Linked Data. The pflow ecosystem uses JSON-LD as its model interchange format, with @context: "https://pflow.xyz/schema". JSON-LD makes models self-describing and content-addressable (Chapter 14).
Liveness
A transition is live if it can eventually fire from every reachable marking (possibly after other transitions fire first). A net is live if all transitions are live. Liveness means the system never permanently loses capability.
Marking
The distribution of tokens across all places at a given moment; the state of the Petri net. Written as a vector m, where m(p) is the token count at place p. The initial marking m₀ is the starting state.
Mass-action kinetics
A rate law from chemistry where the rate of a reaction is proportional to the product of reactant concentrations. Applied to Petri nets: the continuous firing rate of transition t is k_t · Π_p m(p)^W(p,t), where k_t is the rate constant and W(p, t) is the arc weight (Chapter 3).
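A sketch of that rate formula, using a hypothetical "infect" transition that consumes one token each from places S and I (the function name and rate constant are illustrative):

```javascript
// Mass-action rate for a transition: k · Π_p m(p)^W(p,t), where the
// product runs over the transition's input places.
function massActionRate(k, marking, inputWeights) {
  return Object.entries(inputWeights).reduce(
    (rate, [place, w]) => rate * Math.pow(marking[place], w),
    k
  );
}

const m = { S: 10, I: 2 };
console.log(massActionRate(0.5, m, { S: 1, I: 1 })); // 10 = 0.5 · 10 · 2
```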
MiMC (Minimum Multiplicative Complexity)
A hash function designed for efficiency inside arithmetic circuits. Used in pflow’s zero-knowledge proofs to hash Petri net markings with minimal constraint count (Chapter 12).
Monotonic expansion
The property that tokens can only be created, never destroyed. Used in game nets where history places (prefixed with _) accumulate records of moves without ever losing them (Chapter 6).
ODE (Ordinary Differential Equation)
An equation relating a function to its derivatives. The continuous relaxation of a Petri net produces a system of ODEs: dm/dt = C · v(m), where v(m) is the rate vector (Chapter 3).
P-invariant
A vector x such that x^T · C = 0. The weighted sum x^T · m is constant for all reachable markings. P-invariants prove conservation laws — they identify quantities that are preserved by every transition firing (Chapter 2).
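Checking a candidate P-invariant is a matrix-vector product. A sketch using an illustrative SIR-style incidence matrix (rows [S, I, R], columns [infect, recover]) — not the pflow analysis API:

```javascript
// Incidence matrix: infect turns an S into an I, recover turns an I
// into an R.
const C = [
  [-1, 0], // S: consumed by infect
  [1, -1], // I: produced by infect, consumed by recover
  [0, 1],  // R: produced by recover
];

// x is a P-invariant iff x^T · C is the zero vector.
function isPInvariant(x, C) {
  for (let t = 0; t < C[0].length; t++) {
    const dot = x.reduce((acc, xi, p) => acc + xi * C[p][t], 0);
    if (dot !== 0) return false;
  }
  return true;
}

console.log(isPInvariant([1, 1, 1], C)); // true  — S + I + R is conserved
console.log(isPInvariant([1, 0, 0], C)); // false — S alone is not
```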
Petri net
A mathematical modeling language consisting of places, transitions, arcs, and tokens. Formally, a tuple (P, T, F, W, m₀), where P is a set of places, T is a set of transitions, F ⊆ (P × T) ∪ (T × P) is the flow relation, W : F → ℕ is the weight function, and m₀ is the initial marking.
Place
A node in a Petri net representing a condition, resource, or state. Places hold tokens. Drawn as circles. In the continuous relaxation, token counts become continuous concentrations.
Reachability graph
The graph of all markings reachable from the initial marking through sequences of transition firings. Each node is a marking, each edge is a transition firing. The reachability graph may be infinite for unbounded nets.
State equation
The algebraic relation m' = m₀ + C · σ, where σ is the firing count vector (how many times each transition has fired). A necessary but not sufficient condition for reachability — the actual firing sequence must also be valid.
State root
A hash of the simulation output used for cross-implementation verification. If the Go server and JavaScript browser produce the same state root from the same model, the implementations agree (Chapter 18).
Token
A marker in a place. Tokens represent the presence of a resource, the truth of a condition, or a unit of something being processed. In the discrete model, tokens are non-negative integers. In the continuous relaxation, they become non-negative real numbers.
Transition
A node in a Petri net representing an event, action, or chemical reaction. Transitions consume tokens from input places and produce tokens in output places. Drawn as rectangles. In the continuous relaxation, transitions have firing rates.
Tsit5
Tsitouras 5th-order Runge-Kutta method with embedded 4th-order error estimator. The default ODE solver in go-pflow and pflow.xyz. Uses adaptive step size control for automatic accuracy management (Appendix A).
URDNA2015
Universal RDF Dataset Normalization Algorithm 2015. Used in pflow to canonicalize JSON-LD models before hashing, ensuring that semantically equivalent documents produce identical content identifiers regardless of key order or formatting (Chapter 14).
Weight
The number of tokens consumed or produced by an arc when its transition fires. A weight of w on an input arc means the transition requires at least w tokens in the source place and removes w tokens when firing. Default weight is 1.
Zero-knowledge proof
A cryptographic protocol where a prover convinces a verifier that a statement is true without revealing any information beyond the truth of the statement. In pflow, ZK proofs verify that a Petri net transition was validly fired without revealing the marking (Chapter 12).