The Azimuth Project
Blog - network theory (Biamonte guest posts) (changes)


This page contains material on stochastic Petri nets and chemical reaction networks written by Jacob Biamonte. It may go into one or more blog posts. To discuss this page as it is being written, go to the Azimuth Forum. See also Blog - network theory (amoeba post) and Blog - network theory (examples).

Chemical reaction networks

This paper:

• D. F. Anderson, G. Craciun and T.G. Kurtz, Product-form stationary distributions for deficiency zero chemical reaction networks, arXiv:0803.3042.

proves that for a large class of stochastic Petri nets, there exist equilibrium solutions of the master equation where the number of things in each state is distributed according to a Poisson distribution. Even more remarkably, these probability distributions are independent, so that knowing how many things are in one state tells you nothing about how many are in another. A nice quote from the paper is

“The surprising aspect of the deficiency zero theorem is that the assumptions of the theorem are completely related to the network of the system whereas the conclusions of the theorem are related to the dynamical properties of the system.”

For a stochastic Petri net to have these marvellous properties, it suffices that it have ‘deficiency zero’ and obey the ‘complex balancing condition’. These conditions are widely used in studying equilibrium solutions for the rate equation of a chemical reaction network. A good introductory treatment of these conditions is:

• Jeremy Gunawardena, Chemical reaction network theory for in-silico biologists.

However, Anderson noted that the complex balancing condition is also equivalent to the ‘nonlinear traffic equations’ in queuing theory, as explained here:

• F. Kelly, Reversibility and Stochastic Networks, Wiley, New York, 1979.

• J. Mairesse and H.-T. Nguyen, Deficiency zero Petri nets and product form, arXiv:0905.3158.

So, we see that these ideas show up in quite different applications. Section 6 of the following paper:

• Eduardo Sontag and Doron Zeilberger, Symbolic computational approach to a problem involving multivariate Poisson distributions, Advances in Applied Mathematics 44 (2010), 359-377.

quickly reviews this material and gives examples. Here we will study some examples using quantum field theory techniques.

From chemical reaction networks to Petri nets

Chemical reaction networks are a widely used formalism for describing the differential equations that arise in modelling chemical reactions. This formalism turns out to be just as expressive as the Petri net formalism, and each kind of network can be translated into the other. Our focus here will be on translating chemical reaction networks into Petri nets, so we can apply the methods from the Petri net field theory we talked about before.

Chemical reaction networks

In chemical reactions, the chemical species are the input, intermediate, and resulting chemicals of a given reaction.

Definition. (Chemical reaction network [Anderson et al.]). Let $S = \{S_i\}$, $C = \{v_k\}$, and $R = \{v_k \to v_k'\}$ denote the sets of species, complexes, and reactions, respectively. The triple $\{S, C, R\}$ is called a chemical reaction network.

Example I

A simple reversible reaction

This stochastic Petri net describes a reversible reaction, for example going between two chemical species:

Following our general rules, time evolution will be generated by this Hamiltonian:

$$H = \beta (a_y^\dagger a_x - N_x) + \gamma (a_x^\dagger a_y - N_y) = \beta (a_y^\dagger a_x - a_x^\dagger a_x) + \gamma (a_x^\dagger a_y - a_y^\dagger a_y)$$

We want to show that the following is an equilibrium solution of the master equation:

$$\psi = \sum_{m,n \ge 0} \psi_{m,n}\, x^m y^n = e^{b x} e^{c y}$$

meaning that $H\psi = 0$.

We break the Hamiltonian $H$ into two parts,

$$H = H_1 + H_2$$

with

$$H_1 = \beta (a_y^\dagger a_x - a_x^\dagger a_x)$$
$$H_2 = \gamma (a_x^\dagger a_y - a_y^\dagger a_y)$$

and note that

$$H_1 e^{b x} e^{c y} = \beta \left(y\frac{\partial}{\partial x} - x \frac{\partial}{\partial x}\right) e^{b x} e^{c y} = \beta (y - x)\, b\, e^{b x} e^{c y}$$
$$H_2 e^{b x} e^{c y} = \gamma \left(x\frac{\partial}{\partial y} - y \frac{\partial}{\partial y}\right) e^{b x} e^{c y} = \gamma (x - y)\, c\, e^{b x} e^{c y}$$

We now see that $H \psi = 0$ if and only if

$$\beta b = \gamma c$$

So, if $\gamma \ne 0$, we must have

$$c = k b$$

where

$$k = \beta/\gamma$$

Thus we have a one-parameter family of equilibrium solutions

$$\psi = e^{b x} e^{k b y}$$

Note that in all these solutions, the numbers of $x$ and $y$ particles are independent Poisson distributions, just as the general theorem says. Namely, the probability of finding $m$ particles of type $x$ and $n$ particles of type $y$ is proportional to:

$$\psi_{m,n} = \frac{b^m}{m!} \frac{(k b)^n}{n!}$$

(We have not bothered to normalize this probability distribution.) More precisely, the number of $x$ particles is a Poisson distribution with mean $b$, while the number of $y$ particles is a Poisson distribution with mean $k b$. So, the mean number of $y$ particles is $k$ times the mean number of $x$ particles in any of these equilibrium solutions.
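As a sanity check, here is a small numerical experiment in Python (with made-up rate constants): we write the master equation for this Petri net in coefficient form and verify that a product of independent Poisson distributions whose means satisfy $\beta \langle x \rangle = \gamma \langle y \rangle$ makes the right-hand side vanish.

```python
import math

# Hypothetical rate constants and a free parameter b for the equilibrium family.
beta, gamma = 2.0, 3.0    # X -> Y at rate beta, Y -> X at rate gamma
b = 1.5                   # mean number of x particles
k = beta / gamma          # then mean_y = k*b, so beta*mean_x = gamma*mean_y

# Unnormalized product-of-Poissons coefficients: psi_{m,n} = (b^m/m!) ((k b)^n/n!)
def psi(m, n):
    if m < 0 or n < 0:
        return 0.0
    return b**m / math.factorial(m) * (k * b)**n / math.factorial(n)

# Coefficient form of the master equation d(psi_{m,n})/dt = (H psi)_{m,n} for
# H = beta (a_y^+ a_x - N_x) + gamma (a_x^+ a_y - N_y):
def rhs(m, n):
    return (beta  * ((m + 1) * psi(m + 1, n - 1) - m * psi(m, n))
          + gamma * ((n + 1) * psi(m - 1, n + 1) - n * psi(m, n)))

residual = max(abs(rhs(m, n)) for m in range(20) for n in range(20))
print(residual)  # zero up to floating-point rounding
```

Choosing means that violate $\beta \langle x \rangle = \gamma \langle y \rangle$ makes this residual jump to order one, as the derivation above predicts.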

Example II

Digression on the logistic equation

Stability of solutions

A stable equilibrium is an equilibrium solution such that all solutions with sufficiently close initial data converge to that equilibrium solution as $t \to + \infty$.


Baez-Brain Dump, star date: 4-12-2011

Today I ran into a massive Baez quasar, which resulted in my sensors being overloaded with 7 pages of raw data.

Stochastic Petri Net

States: $S = \{1, \dots, k\}$

Transitions: $T$. We consider $\tau \in T$ such that

$\tau$ has inputs $m(\tau) = (m_1(\tau), \dots, m_k(\tau)) \in \mathbb{N}^k$

$\tau$ has outputs $n(\tau) = (n_1(\tau), \dots, n_k(\tau)) \in \mathbb{N}^k$

and rate constant $r(\tau) \in \mathbb{R}$

Rate equation

Starting from a Petri net, we arrive at a rate equation.

Let $x_i(t)$ be the concentration of the $i$th state $(i = 1, \dots, k)$:

$$\frac{d x_i(t)}{d t} = \sum_{\tau\in T} r(\tau)\, x_1^{m_1(\tau)}(t) \cdots x_k^{m_k(\tau)}(t)\, (n_i(\tau) - m_i(\tau))$$

Theorem 1. (Feinberg). If the Petri net is weakly reversible and has deficiency 0, then there exists $x \in \mathbb{R}^k$ at which the right-hand side vanishes.

Example

[PIC Here]

$k = 1$, $T = \{1, 2\}$, rate constants $r_1$ and $r_2$.

$$\frac{d x_1}{d t} = -r_1 x_1^2 + r_2 x_1$$

$\exists x_1 \in \mathbb{R}$ such that

$$-r_1 x_1^2 + r_2 x_1 = 0$$
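A quick numerical illustration in Python (with hypothetical rates): Euler-integrating the rate equation shows solutions with positive initial data settling at the nonzero root $x_1 = r_2/r_1$.

```python
# Euler-integrate the rate equation dx1/dt = -r1*x1^2 + r2*x1 (hypothetical
# rates) and compare the long-time value with the nonzero root x1 = r2/r1
# obtained by setting the right-hand side to zero.
r1, r2 = 2.0, 3.0

x = 0.1          # a small positive initial concentration
dt = 1e-3
for _ in range(100_000):   # integrate up to t = 100
    x += dt * (-r1 * x**2 + r2 * x)

print(x, r2 / r1)  # the trajectory settles at r2/r1
```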

Master equation

We also arrive at a master equation.

Now say we have a probability $\psi_{i_1 \dots i_k}$ of having $i_1$ things in state $1$, ..., $i_k$ things in state $k$, etc. We write

$$\psi = \sum \psi_{i_1 \dots i_k}\, z_1^{i_1} \cdots z_k^{i_k}$$

and $a_i^\dagger \psi = z_i \psi$, $a_i \psi = \frac{\partial}{\partial z_i} \psi$; then the master equation says

$$\frac{d}{d t}\psi = H\psi$$

Each transition corresponds to an operator. There is a term $a_1^{\dagger n_1} \cdots a_k^{\dagger n_k}\, a_1^{m_1} \cdots a_k^{m_k}$,

but we also have to subtract off an additional factor, given by the number operator. This results in

$$a_1^{\dagger n_1} \cdots a_k^{\dagger n_k}\, a_1^{m_1} \cdots a_k^{m_k} - N_1^{\underline{m}_1} \cdots N_k^{\underline{m}_k}$$

where $N_i = a_i^\dagger a_i$ and

$$N_i^{\underline{m}_i} = N_i (N_i - 1)(N_i - 2) \cdots (N_i - m_i + 1)$$

Explain these ‘falling powers’ here.
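To make the falling powers concrete, here is a small Python check of the fact (recorded later in these notes) that $N^{\underline{m}} = (a^\dagger)^m a^m$: acting on a monomial $z^n$, both sides multiply it by $n(n-1)\cdots(n-m+1)$.

```python
# The falling power N^{m} = N(N-1)...(N-m+1) acts on the monomial z^n as
# multiplication by n(n-1)...(n-m+1).  We check N^{m} = (a^+)^m a^m by
# applying a = d/dz m times and then a^+ = (multiplication by z) m times
# to a single monomial, tracked as a pair (coef, deg) meaning coef * z^deg.
def apply_a(coef, deg):          # d/dz : c z^n -> c n z^(n-1)
    return (coef * deg, deg - 1)

def apply_a_dagger(coef, deg):   # z *  : c z^n -> c z^(n+1)
    return (coef, deg + 1)

def falling_power(n, m):         # n(n-1)...(n-m+1)
    result = 1
    for j in range(m):
        result *= n - j
    return result

for n in range(8):
    for m in range(5):
        coef, deg = 1, n
        for _ in range(m):
            coef, deg = apply_a(coef, deg)
        for _ in range(m):
            coef, deg = apply_a_dagger(coef, deg)
        assert deg == n and coef == falling_power(n, m)
print("checked (a^+)^m a^m = N^{m} on monomials")
```

Note that when $m > n$ the repeated derivatives kill the monomial, matching the fact that the falling power $n(n-1)\cdots(n-m+1)$ contains a zero factor.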

In general

$$H = \sum_{\tau \in T} r(\tau) \left(a_1^{\dagger n_1(\tau)} \cdots a_k^{\dagger n_k(\tau)}\, a_1^{m_1(\tau)} \cdots a_k^{m_k(\tau)} - N_1^{\underline{m}_1(\tau)} \cdots N_k^{\underline{m}_k(\tau)}\right)$$

Solutions.

To solve $H\psi = 0$, try

$$\psi = \prod_{i=1}^k \sum_{n_i = 0}^\infty \frac{\alpha_i^{n_i}}{n_i!} z_i^{n_i} = \prod_{i=1}^k e^{\alpha_i z_i}$$
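As a sketch of how this ansatz works, here is a Python check on the $k = 1$ example above (transitions $2X \to X$ at rate $r_1$ and $X \to 2X$ at rate $r_2$, with made-up values): the coefficients $\psi_n = \alpha^n/n!$ annihilate the master-equation right-hand side when $\alpha = r_2/r_1$, the equilibrium of the rate equation.

```python
import math

# In coefficient form, the master equation for this net reads
#   d(psi_n)/dt = r1 [(n+1) n psi_{n+1} - n (n-1) psi_n]
#               + r2 [(n-1) psi_{n-1} - n psi_n]
# and the Poisson ansatz psi_n = alpha^n/n! should make this vanish when
# alpha solves the rate equation: alpha = r2/r1.
r1, r2 = 2.0, 3.0   # hypothetical rate constants
alpha = r2 / r1

def psi(n):
    return alpha**n / math.factorial(n) if n >= 0 else 0.0

def rhs(n):
    return (r1 * ((n + 1) * n * psi(n + 1) - n * (n - 1) * psi(n))
          + r2 * ((n - 1) * psi(n - 1) - n * psi(n)))

print(max(abs(rhs(n)) for n in range(30)))  # zero up to rounding
```

This net is weakly reversible with deficiency zero, so the vanishing residual is exactly what the Anderson–Craciun–Kurtz theorem predicts.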

Pics for later

Digression on coherent states

We make heavy use of coherent states in this project.

We let $a^\dagger$ ($a$) be creation (destruction) operators and define the number operator as $N := a^\dagger a$. Then

  • $[N, a^\dagger] = a^\dagger$, $[a, N] = a$ and $[a, a^\dagger] = 1$

Markov processes

We could be more general, but let’s keep things simple and suppose we have a physical system with some set of states $X$. Then a Markov process gives a rule for computing, at any time $t \ge 0$, the probability of hopping from any state $j \in X$ to any state $i \in X$. This rule gives us a matrix $U(t)$, but this matrix needs to obey some properties. Tersely put:

Definition. A Markov process consists of a set $X$ and a strongly continuous one-parameter semigroup of stochastic operators on $L^1(X)$.

But we should remind you what all these buzzwords mean! First, we’re using ‘stochastic state’ as an overly fancy synonym for ‘probability distribution’:

Definition. A stochastic state is a function $\psi: X \to [0,\infty)$ with

$$\sum_{i \in X} \psi_i = 1$$

We think of stochastic states as living in the vector space

$$L^1(X) = \{ \psi: X \to \mathbb{R} \;:\; \int_X |\psi(x)| \, d x < \infty \}$$

so we say:

Definition. An operator $U : L^1(X) \to L^1(X)$ is stochastic if it maps stochastic states to stochastic states.

We want to describe time evolution in our Markov process using a ‘one-parameter semigroup’ of stochastic operators:

Definition. A one-parameter semigroup of stochastic operators consists of a stochastic operator $U(t)$ for each time $t \ge 0$, obeying

$$U(0) = I$$

and

$$U(t) U(s) = U(t+s)$$

But it would be ridiculous if our semigroup weren’t continuous in a certain sense:

Definition. A one-parameter semigroup of stochastic operators $U(t)$ is continuous if

$$t_i \to t \quad \implies \quad U(t_i) \psi \to U(t) \psi$$

where convergence is in the usual norm topology on $L^1(X)$….

an infinitesimal stochastic matrix $H_{i j}$, where $i, j \in X$.

We’ll remind you what ‘infinitesimal stochastic’ means in a second, but the point is that for $i \ne j$, the matrix element $H_{i j}$ describes the probability per unit time of hopping from the state $j \in X$ to the state $i \in X$. So, if $\psi_i(t)$ is the probability of our system being in the state $i$ at time $t$, we want this equation to hold…
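To make this concrete, here is a toy Python example with a made-up 3-state matrix: an infinitesimal stochastic matrix has nonnegative off-diagonal entries and columns summing to zero, and exponentiating it gives a one-parameter semigroup of stochastic operators $U(t) = \exp(t H)$.

```python
# Toy 3-state example (made-up entries).  H[i][j] is the probability per
# unit time of hopping from state j to state i; each column sums to zero.
H = [[-0.5,  0.2,  0.0],
     [ 0.5, -0.3,  0.4],
     [ 0.0,  0.1, -0.4]]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=60):
    """Matrix exponential via the Taylor series I + A + A^2/2! + ..."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    power = [row[:] for row in result]
    fact = 1.0
    for m in range(1, terms):
        power = mat_mul(power, A)
        fact *= m
        for i in range(n):
            for j in range(n):
                result[i][j] += power[i][j] / fact
    return result

t = 2.0
U = expm([[t * h for h in row] for row in H])
# U should be stochastic: nonnegative entries, columns summing to 1.
for j in range(3):
    assert abs(sum(U[i][j] for i in range(3)) - 1.0) < 1e-12
    assert all(U[i][j] >= 0 for i in range(3))
print("exp(t H) is a stochastic matrix")
```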

Stochastic mechanics versus quantum mechanics

Let’s recall the analogy we began sketching in Part 5, and push it a bit further. The idea is that stochastic mechanics differs from quantum mechanics in two big ways.

First, instead of complex amplitudes, it uses nonnegative real probabilities. The complex numbers form a ring; the nonnegative real numbers form a mere rig, which is a ‘ring without negatives’. Rigs are much neglected in the typical math curriculum, but unjustly so: they’re almost as good as rings in many ways, and there are lots of important examples, like the natural numbers $\mathbb{N}$ and the nonnegative real numbers $[0,\infty)$. For probability theory we should learn to love rigs!

Second, instead of the inner product of two functions, it uses the integral of one function. More on that in a minute.

We’ll start with a set $X$ whose points are states that a system can be in. (We could generalize everything so that $X$ is a measure space, but this section may already be too mathematical for some of you!) Then:

• In quantum mechanics, we say that the system has an amplitude $\psi(x)$ of being in any state $x \in X$. These amplitudes are complex numbers with

$$\sum_{x \in X} | \psi(x) |^2 = 1$$

We call $\psi: X \to \mathbb{C}$ obeying this equation a quantum state.

• In stochastic mechanics, we say that our system has a probability $\psi(x)$ of being in any state $x \in X$. These probabilities are nonnegative real numbers with

$$\sum_{x \in X} \psi(x) = 1$$

We call $\psi: X \to [0,\infty)$ obeying this equation a stochastic state.

In quantum mechanics we often use this abbreviation:

$$\langle \phi, \psi \rangle = \sum_{x \in X} \overline{\phi}(x)\, \psi(x)$$

so that a quantum state has

$$\langle \psi, \psi \rangle = 1$$

Similarly, we could introduce this notation in stochastic mechanics:

$$\langle \psi \rangle = \sum_{x \in X} \psi(x)$$

so that a stochastic state has

$$\langle \psi \rangle = 1$$

This notation is a bit risky, since angle brackets of this sort often stand for expectation values of an observable. In Part 5 we used $\int \psi$ instead of $\langle \psi \rangle$, because of this worry. But let’s try it today; we’ll see it’s not so bad.

In quantum mechanics, $\langle \phi, \psi \rangle$ is well-defined whenever both $\phi$ and $\psi$ live in the space

$$L^2(X) = \{ \psi: X \to \mathbb{C} \;:\; \int_X |\psi(x)|^2 \, d x < \infty \}$$

In stochastic mechanics, $\langle \psi \rangle$ is well-defined whenever $\psi$ lives in the space

$$L^1(X) = \{ \psi: X \to \mathbb{R} \;:\; \int_X |\psi(x)| \, d x < \infty \}$$

You’ll notice I wrote $\mathbb{R}$ rather than $[0,\infty)$ here. It’s an annoying but unavoidable fact that in some calculations we need functions that take negative values, even though our stochastic states are nonnegative.

A while ago we mentioned observables. How do they enter the game? An observable is something we can measure in a state; we may get different answers when we repeat the measurement, in a random sort of way, but there’s a nice formula for the average or ‘expected’ value.

• In quantum mechanics, an observable is a self-adjoint linear operator $A$ on $L^2(X)$. The expected value of $A$ in the state $\psi$ is

$$\langle \psi, A \psi \rangle$$

Here we’re using the fact that we can apply $A$ to $\psi$ and get a new vector $A \psi \in L^2(X)$, at least if $A$ is bounded.

• In stochastic mechanics, an observable is a real-valued function $A$ on $X$. The expected value of $A$ in the state $\psi$ is

$$\langle A \psi \rangle$$

Here we’re using the fact that we can multiply $A$ and $\psi$ and get a new vector $A \psi \in L^1(X)$, at least if $A$ is bounded.
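For instance (a toy stochastic example with made-up numbers), computing $\langle A \psi \rangle$ amounts to multiplying the observable and the state pointwise and then summing over $X$:

```python
# Toy stochastic example: expected value <A psi> = sum over x of A(x) psi(x).
X = ["rain", "sun", "snow"]
psi = {"rain": 0.5, "sun": 0.3, "snow": 0.2}   # a stochastic state
A = {"rain": 10.0, "sun": 25.0, "snow": -2.0}  # an observable, A: X -> R

assert abs(sum(psi.values()) - 1.0) < 1e-12    # probabilities sum to 1

expected = sum(A[x] * psi[x] for x in X)       # 10*0.5 + 25*0.3 - 2*0.2 = 12.1
print(expected)
```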

Besides states and observables, we need transformations that map states to states. We use these to describe how our system evolves in time, for example.

• In quantum mechanics, an isometry is a linear map $U: L^2(X) \to L^2(X)$ such that

$$\langle U \phi, U \psi \rangle = \langle \phi, \psi \rangle$$

for all $\phi, \psi \in L^2(X)$.

• In stochastic mechanics, a stochastic operator is a linear map $U: L^1(X) \to L^1(X)$ such that

$$\langle U \psi \rangle = \langle \psi \rangle$$

for all $\psi \in L^1(X)$.

In quantum mechanics we are mainly interested in invertible isometries, which are called unitary operators. There are, however, very few invertible stochastic operators. This is why in quantum mechanics we think of time evolution as being reversible, but not in stochastic mechanics.

Puzzle 1. Suppose $X$ is a finite set. What are the invertible stochastic operators $U: L^1(X) \to L^1(X)$?

Now, in quantum mechanics there’s a beautiful one-to-one correspondence between observables and symmetries, which goes like this. Suppose we have some unitary operators $U(t) : L^2(X) \to L^2(X)$ depending on $t \in \mathbb{R}$, and forming a 1-parameter group:

$$U(0) = 1$$
$$U(s+t) = U(s) U(t)$$

Then if $U(t) \psi$ depends continuously on $t$ for all $\psi$, we can always write

$$U(t) = \exp(-i t H)$$

for some observable $H$. We insert the $i$ so that $H$ will be self-adjoint. For example, when $U(t)$ describes ‘evolution in time’, we call $H$ the Hamiltonian, or energy.

How do we get this observable? We just define

$$-i H \psi = \left.\frac{d}{d t} U(t) \psi \right|_{t = 0}$$

It’s not obvious, but this derivative is well-defined, at least for a dense set of vectors $\psi$, and the resulting $H$ has $U(t) = \exp(-i t H)$. This is Stone’s theorem; we haven’t given a precise statement, so click on the link if you want one. Moreover: if we let

$$\psi(t) = \exp(-i t H) \psi$$

for some vector on which $H$ is well-defined, we get Schrödinger’s equation

$$\frac{d}{d t} \psi(t) = - i H \psi(t)$$

In stochastic mechanics, we don’t have a correspondence between observables and symmetries! Suppose we have some stochastic operators $U(t) : L^1(X) \to L^1(X)$ depending on $t \in [0,\infty)$, and forming a 1-parameter semigroup:

$$U(0) = 1$$
$$U(s+t) = U(s) U(t)$$

where the ‘semi’ means that we can only evolve forwards in time, not backwards. Then if $U(t) \psi$ depends continuously on $t$ for all $\psi$, we can always write

$$U(t) = \exp(t H)$$

for some operator $H$, defined by

$$H \psi = \left.\frac{d}{d t} U(t) \psi \right|_{t = 0}$$

Here we don’t include an $i$ since we’re working with real numbers, and $H$ wouldn’t be self-adjoint even if we did include the $i$. Instead, $H$ is infinitesimal stochastic.

(Lots of people include a minus sign in these equations, but I’m not doing that, for some silly reason.)

Conversely, starting from a self-adjoint operator we can

A quantum version of Noether's theorem


(Fact: $N^{\underline{m}} = (a^\dagger)^m a^m$.)

Example of Noether's theorem

Fragmentary stuff:

And in fact, we can think of the total number of things as an operator on our space of formal power series:

$$N = N_1 + N_2$$

where $N_1$ and $N_2$ are the number operators we’ve seen so often before:

$$N_i = a_i^\dagger a_i = z_i \frac{\partial}{\partial z_i}$$

What exactly is the relation between $N$ and the total number of things? Part of the answer is this: the expected total number of things in any state $\Phi$ is given by

$$\sum N \Phi$$

Let’s see why!

First of all, what does this expression even mean? The funny ‘sum’ notation here was introduced in Part 6, but now I’m using it for power series in two variables instead of one. Namely, for any power series

$$\Phi = \sum_{n_1, n_2 = 0}^\infty \phi_{n_1, n_2}\, z_1^{n_1} z_2^{n_2}$$

we define

$$\sum \Phi = \sum_{n_1, n_2 = 0}^\infty \phi_{n_1, n_2}$$

Thus, $\sum \Phi = 1$ whenever $\Phi$ is a state, since probabilities must sum to 1. But we also have

$$N \Phi = \sum_{n_1, n_2 = 0}^\infty \phi_{n_1, n_2} \left(z_1 \frac{\partial}{\partial z_1} + z_2 \frac{\partial}{\partial z_2}\right) z_1^{n_1} z_2^{n_2} = \sum_{n_1, n_2 = 0}^\infty (n_1 + n_2)\, \phi_{n_1, n_2}\, z_1^{n_1} z_2^{n_2}$$

so that

$$\sum N \Phi = \sum_{n_1, n_2 = 0}^\infty (n_1 + n_2)\, \phi_{n_1, n_2}$$

And this is exactly the expected value of the total number of things.
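Here is a quick numerical confirmation in Python (with made-up Poisson means): for a normalized product of two Poisson distributions, $\sum \Phi$ is 1 and $\sum N \Phi$ equals the sum of the means.

```python
import math

# Made-up Poisson means a1, a2: for the normalized product state with
# phi_{n1,n2} = e^{-(a1+a2)} a1^{n1} a2^{n2} / (n1! n2!),
# sum(Phi) should be 1 and sum(N Phi) should equal a1 + a2.
a1, a2 = 2.0, 3.5
NMAX = 40  # truncation; the neglected tail is negligible for these means

def phi(n1, n2):
    return (math.exp(-(a1 + a2)) * a1**n1 * a2**n2
            / (math.factorial(n1) * math.factorial(n2)))

total_prob = sum(phi(n1, n2) for n1 in range(NMAX) for n2 in range(NMAX))
expected_N = sum((n1 + n2) * phi(n1, n2)
                 for n1 in range(NMAX) for n2 in range(NMAX))

print(total_prob)  # close to 1
print(expected_N)  # close to a1 + a2 = 5.5
```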

So, for this and other reasons, we can think of the operator $N$ as an ‘observable’ that counts the total number of things. Now, in quantum mechanics Noether’s theorem tells us that an observable is a conserved quantity (it doesn’t change with time) if it commutes with the Hamiltonian. So you should suspect that

$$[H, N] = 0$$

where the commutator $[H,N]$ is defined to be $H N - N H$.

Puzzle 2. Is $[H,N] = 0$?

If this is true, it should follow that $H$ will commute with any function of the operator $N$, for example the function

$$\delta(N - n)$$

where $\delta$ is the Kronecker delta, which equals 1 at the origin and is zero elsewhere. This operator should be a projection operator, projecting onto an eigenspace of $N$, say

$$\{ \Phi : N \Phi = n \Phi \}$$

So, applying it to an equilibrium state, we should get a new equilibrium state, say $\Phi_n$:

$$H \Phi_n = 0$$

category: blog