The Azimuth Project
Blog - network theory (Fong guest posts) (Rev #14, changes)


This page contains material on stochastic Petri nets and chemical reaction networks written by Brendan Fong. It may go into one or more blog posts. To discuss this page as it is being written, go to the Azimuth Forum.

Taking expectation values of the master equation

The master equation describes the time evolution of a probability distribution over pure states, while the rate equation describes the time evolution of the expected value of the state of the system. You might think that this means that by taking the ‘expected value’ of the master equation, we get the rate equation. As it turns out, that’s only approximately true. But it’s exactly true if the state of the system takes the form of a product of Poisson distributions!

Let’s see how this goes. Remember that given a state $\Psi = \sum_{n \in \mathbb{N}^k} \psi_n z^n$, its expected value is $E(\Psi) = \sum_{n \in \mathbb{N}^k} n \psi_n \in [0,\infty)^k$. This means we take the sum of each pure state vector weighted by the probability our system is in that pure state. We’ll pretend for the moment that this sum always converges. Then

$$\frac{d}{d t} E(\Psi) = \sum_{n \in \mathbb{N}^k} \sum_{\tau \in T} r(\tau) \, \bigl((n + n(\tau) - m(\tau)) - n\bigr) \, \frac{n!}{(n-m(\tau))!} \, \psi_n.$$

To simplify this and make it look a bit more like the rate equation, we’ll write this as

$$\frac{d}{d t} E(\Psi) = \sum_{\tau \in T} r(\tau) \left( \sum_{n \in \mathbb{N}^k} \frac{n!}{(n-m(\tau))!} \, \psi_n \right) (n(\tau) - m(\tau)).$$

Then, remembering that we think of $x$ as the expected value $E(\Psi)$ of our mixed state, we can see that the rate equation and the master equation get along perfectly when

$$\sum_{n \in \mathbb{N}^k} \frac{n!}{(n-m(\tau))!} \, \psi_n = (E(\Psi))^{m(\tau)}$$

for each reaction $\tau$. This is not always true, but it is always true when $\Psi$ takes the form of a product of independent Poisson distributions with parameter $x = E(\Psi) \in [0, \infty)^k$. That is, when $\Psi$ has the form

$$\Psi = e^{-x} \sum_{n \in \mathbb{N}^k} \frac{x^n}{n!} z^n.$$

(Don’t forget we’re using an index-free notation here, so $e^{-x} = \prod_{i=1}^k e^{-x_i}$ and $x^n = \prod_{i=1}^k x_i^{n_i}$!)

We can show this quite simply:

$$\sum_{n \in \mathbb{N}^k} \frac{n!}{(n-m(\tau))!} \, \psi_n = \sum_{n \in \mathbb{N}^k} \frac{n!}{(n-m(\tau))!} \, \frac{x^n}{n!} e^{-x} = x^{m(\tau)} e^{-x} \sum_{n \in \mathbb{N}^k} \frac{x^{n - m(\tau)}}{(n-m(\tau))!} = x^{m(\tau)} e^{-x} e^{x} = x^{m(\tau)} = (E(\Psi))^{m(\tau)}.$$

(Here terms with any $n_i < m(\tau)_i$ simply vanish, since the reciprocal factorial of a negative number is zero.)

From this we can see that Poisson distributions are special in the relationship between the master and rate equations.

Stuff we may not really need!

For what’s to come, it’ll be good to write the master equation using number operators. For each species $i$, the number operator $N_i$ is the product of the creation and annihilation operators for that species:

$$N_i = a_i^\dagger a_i$$

Obviously then we have

$$(a_i^\dagger a_i)^p = N_i^p$$

for any natural number $p$. But more interestingly, we have

(1) $${a_i^\dagger}^p a_i^p = N_i^{\underline{p}}$$

where the underline here indicates a falling power:

$$N_i^{\underline{p}} = N_i (N_i - 1) (N_i - 2) \cdots (N_i - p + 1)$$

Puzzle 2: Prove equation (1). If you get stuck, reread Part 6.
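Puzzle 2 asks for a proof, but we can at least check equation (1) numerically for a single species, using the representation where $a$ is $\partial / \partial z$ and $a^\dagger$ is multiplication by $z$. (This is a quick sketch in plain Python; the coefficient-list encoding of power series is just for this check.)

```python
def annihilate(psi):
    # a = d/dz acting on a coefficient list, where psi[n] is the coeff of z^n
    return [(n + 1) * c for n, c in enumerate(psi[1:])]

def create(psi):
    # a† = multiplication by z: shift every coefficient up one degree
    return [0] + list(psi)

def apply_n_times(op, p, psi):
    for _ in range(p):
        psi = op(psi)
    return psi

def basis(n, size):
    # the pure state z^n as a coefficient list
    e = [0] * size
    e[n] = 1
    return e

n, p = 7, 3
psi = apply_n_times(create, p, apply_n_times(annihilate, p, basis(n, 10)))
# the coefficient of z^n should be the falling power 7*6*5 = 210
print(psi[n])
```

So ${a^\dagger}^3 a^3$ really does multiply $z^7$ by the falling power $7^{\underline{3}} = 210$, as equation (1) predicts.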

Given this, it makes sense to define

$$N^{\underline{n}} = N_1^{\underline{n_1}} \cdots N_k^{\underline{n_k}}$$

for any vector $n \in \mathbb{N}^k$, since then we have

$${a^\dagger}^n a^n = N^{\underline{n}}$$

because annihilation and creation operators for different species always commute.

This lets us write the master equation a different way. We have

$$\left( {a^\dagger}^{n(\tau)} - {a^\dagger}^{m(\tau)} \right) a^{m(\tau)} = {a^\dagger}^{n(\tau)} a^{m(\tau)} - N^{\underline{m(\tau)}}$$

so the master equation is equivalent to

(2) $$\frac{d}{d t} \Psi(t) = \sum_{\tau \in T} r(\tau) \left( {a^\dagger}^{n(\tau)} a^{m(\tau)} - N^{\underline{m(\tau)}} \right) \Psi(t)$$

The stochastic version of Noether's Theorem

Motivation: $H$ is a Hamiltonian for a (finite) stochastic Petri net.

Observable: diagonal operator.

Write $\int f = \sum_{s \in S} f(s)$.

Let $S$ be a countable set, and let $H \colon L^1(S) \to L^1(S)$ be an operator with only finitely many nonzero entries in each column, such that $e^{t H}$ is stochastic for all $t \ge 0$; that is, $H$ is infinitesimal stochastic. In short, $H$ is a Hamiltonian for a finite stochastic Petri net.

Let $O \colon L^1(S) \to L^1(S)$ be a ‘diagonal’ operator.

Then

(3) $$[H, O] = 0$$

if and only if

(4) $$\frac{d}{d t} \int O \Psi(t) = \frac{d}{d t} \int O^2 \Psi(t) = 0$$

for all evolutions of states $\Psi(t) = e^{t H} \Psi_0$.

We first prove that if $[H, O] = 0$ then $\frac{d}{d t} \int O \Psi(t) = 0$ and $\frac{d}{d t} \int O^2 \Psi(t) = 0$. In fact this is true for all powers of our observable $O$. As our integral and $O$ do not depend on $t$, we may pass the differentiation operator through them to see

(5) $$\frac{d}{d t} \int O^n \Psi = \int O^n \frac{d}{d t} \Psi.$$

Now since $\frac{d}{d t} \Psi = H \Psi(t)$,

(6) $$\frac{d}{d t} \int O^n \Psi = \int O^n H \Psi.$$

But by our hypothesis, $O$ and $H$ commute! So

(7) $$\frac{d}{d t} \int O^n \Psi = \int H O^n \Psi.$$

Since $\int H \chi_s = 0$ for all characteristic functions $\chi_s$, and since the integral is linear, $\int H \Phi = 0$ for all $\Phi \in L^1(S)$, and we thus arrive at the desired result:

(8) $$\frac{d}{d t} \int O^n \Psi(t) = 0.$$
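Before tackling the converse, here is a small numerical illustration of the forward direction (plain Python; the particular $H$ and $O$ below are invented for the example). We take three states, let states 0 and 1 exchange probability at some made-up rates, and let $O$ take the same value on those two states, so $[H, O] = 0$; the expected value of $O$ then stays constant along $\Psi(t) = e^{t H} \Psi_0$:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def expm(A, terms=80):
    # matrix exponential via a truncated Taylor series; fine for small matrices
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = result
    for k in range(1, terms):
        term = [[x / k for x in row] for row in matmul(term, A)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

# H[i][j] is the rate at which state j feeds state i; each column sums to zero.
# States 0 and 1 are connected, and O takes the same value on both, so [H,O] = 0.
H = [[-2.0,  3.0, 0.0],
     [ 2.0, -3.0, 0.0],
     [ 0.0,  0.0, 0.0]]
O = [1.0, 1.0, 2.0]      # a diagonal observable, stored as its diagonal entries
psi0 = [0.5, 0.2, 0.3]   # an initial probability distribution

def expected_O(t):
    psi_t = matvec(expm([[t * h for h in row] for row in H]), psi0)
    return sum(o * p for o, p in zip(O, psi_t))

for t in (0.0, 0.7, 2.5):
    print(expected_O(t))  # stays at 1*0.5 + 1*0.2 + 2*0.3 = 1.3
```

The hand-rolled `expm` is just to keep the sketch dependency-free; in practice one would use a library routine for the matrix exponential.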

The backward direction is a bit trickier. We now assume that the rate of change of the integral of our observable and its square are zero, and wish to show that this implies that our observable and Hamiltonian commute. More explicitly, we observe that our hypotheses are that

(9) $$\frac{d}{d t} \int O \Psi = \int O H \Psi = 0$$

and

(10) $$\frac{d}{d t} \int O^2 \Psi = \int O^2 H \Psi = 0.$$

For elements $i$, $j$ of $S$, we write $M_{i j} = M \chi_j(i)$. Thinking of $i$ and $j$ as states of our system, and $M$ as a Hamiltonian, this represents the rate at which the state $j$ contributes to the state $i$. For our observable $O$, we further abbreviate each $O_{i i}$ as $O_i$, since for $i \ne j$ the entries $O_{i j}$ are zero anyway.

Observe that

(11) $$[H, O]_{i j} = (H O - O H)_{i j} = \sum_{k \in S} (H_{i k} O_{k j} - O_{i k} H_{k j}) = H_{i j} O_j - O_i H_{i j} = (O_j - O_i) H_{i j}.$$

Since we want to show that this is zero for each pair of elements of $S$, it suffices to show that when $H_{i j} \ne 0$, then $O_j = O_i$. That is, we need to show that if the system can move from state $j$ to state $i$, then the observable takes the same value on these two states.
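Identity (11) itself is easy to verify numerically for a random $H$ and a random diagonal $O$ (a quick plain-Python sketch, written just for this check):

```python
import random

random.seed(1)
size = 4
H = [[random.uniform(-1, 1) for _ in range(size)] for _ in range(size)]
O_diag = [random.uniform(-1, 1) for _ in range(size)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# build O as an honest diagonal matrix and compute the commutator directly
O = [[O_diag[i] if i == j else 0.0 for j in range(size)] for i in range(size)]
HO, OH = matmul(H, O), matmul(O, H)

ok = all(abs((HO[i][j] - OH[i][j]) - (O_diag[j] - O_diag[i]) * H[i][j]) < 1e-12
         for i in range(size) for j in range(size))
print(ok)  # True: [H,O]_{ij} = (O_j - O_i) H_{ij} entry by entry
```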

Fix the element $j$, and consider the sum

(12) $$\sum_{i \in S} (O_j - O_i)^2 H_{i j}.$$

Since $j$ is fixed, only finitely many of the $H_{i j}$ are nonzero, and so this sum always exists. Expanding this, we have

(13) $$\sum_{i \in S} (O_j - O_i)^2 H_{i j} = \sum_{i \in S} \left( O_j^2 H_{i j} - 2 O_j O_i H_{i j} + O_i^2 H_{i j} \right).$$

But this sum splits as

(14) $$O_j^2 \sum_{i \in S} H_{i j} - 2 O_j \sum_{i \in S} O_i H_{i j} + \sum_{i \in S} O_i^2 H_{i j},$$

and these three terms are each zero: the first because $H$ is infinitesimal stochastic, so its columns sum to zero, and the latter two by our hypotheses applied to the evolution starting at $\Psi_0 = \chi_j$. Thus we have that

(15) $$\sum_{i \in S} (O_j - O_i)^2 H_{i j} = 0.$$

Now when $i = j$, $O_j - O_i = 0$, and thus $(O_j - O_i)^2 H_{i j} = 0$. But when $i \ne j$, both $(O_j - O_i)^2$ and $H_{i j}$ are non-negative, the latter because the off-diagonal entries of an infinitesimal stochastic operator are non-negative. Since these terms sum to zero, each must individually be zero. Thus for all $i \ne j$, we have

(16) $$(O_j - O_i)^2 H_{i j} = 0.$$

This shows that either $O_i = O_j$ or $H_{i j} = 0$, which is what we wanted to show.
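It is worth noting that conservation of the first moment alone is not enough here: the hypothesis on the second moment is genuinely used. Here is a small made-up example in plain Python, where a middle state decays into two others at equal rates, so the average of $O$ is conserved but the average of $O^2$ is not, and indeed $[H, O] \ne 0$:

```python
# States 0, 1, 2; state 1 decays into states 0 and 2 at equal rates.
# H[i][j] is the rate at which state j feeds state i; columns sum to zero.
H = [[0.0,  1.0, 0.0],
     [0.0, -2.0, 0.0],
     [0.0,  1.0, 0.0]]
O = [0.0, 1.0, 2.0]  # a diagonal observable, stored as its diagonal entries

def d_dt_moment(power, j):
    # d/dt of the integral of O^power, starting from the pure state chi_j:
    # this equals sum_i O_i^power H_{i j}
    return sum(O[i]**power * H[i][j] for i in range(3))

print([d_dt_moment(1, j) for j in range(3)])  # [0.0, 0.0, 0.0]: mean conserved
print(d_dt_moment(2, 1))                      # 2.0: second moment is not
print((O[1] - O[0]) * H[0][1])                # 1.0 = [H,O]_{01}, so [H,O] != 0
```

Intuitively, the state can spread symmetrically from $O = 1$ toward $O = 0$ and $O = 2$: the mean stays put while the variance grows.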


(Fact: $N^{\underline{m}} = (a^\dagger)^m a^m$)

Example of Noether's theorem

Fragmentary stuff:

And in fact, we can think of the total number of things as an operator on our space of formal power series:

$$N = N_1 + N_2$$

where $N_1$ and $N_2$ are the number operators we’ve seen so often before:

$$N_i = a_i^\dagger a_i = z_i \frac{\partial}{\partial z_i}$$

What exactly is the relation between $N$ and the total number of things? Part of the answer is this: the expected total number of things in any state $\Phi$ is given by

$$\sum N \Phi$$

Let’s see why!

First of all, what does this expression even mean? The funny ‘sum’ notation here was introduced in Part 6, but now I’m using it for power series in two variables instead of one. Namely, for any power series

$$\Phi = \sum_{n_1, n_2 = 0}^\infty \phi_{n_1, n_2} z_1^{n_1} z_2^{n_2}$$

we define

$$\sum \Phi = \sum_{n_1, n_2 = 0}^\infty \phi_{n_1, n_2}$$

Thus, $\sum \Phi = 1$ whenever $\Phi$ is a state, since probabilities must sum to 1. But we also have

$$\begin{array}{ccl} N \Phi &=& \sum_{n_1, n_2 = 0}^\infty \phi_{n_1, n_2} \left( z_1 \frac{\partial}{\partial z_1} + z_2 \frac{\partial}{\partial z_2} \right) z_1^{n_1} z_2^{n_2} \\ &=& \sum_{n_1, n_2 = 0}^\infty (n_1 + n_2) \phi_{n_1, n_2} z_1^{n_1} z_2^{n_2} \end{array}$$

so that

$$\sum N \Phi = \sum_{n_1, n_2 = 0}^\infty (n_1 + n_2) \phi_{n_1, n_2}$$

And this is exactly the expected value of the total number of things.
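We can also see this numerically. Below is a plain-Python sketch (the dictionary encoding of $\Phi$ is just for this check) that builds a product of independent Poisson states with means $x_1 = 1.5$ and $x_2 = 0.5$, then confirms that $\sum \Phi = 1$ and that $\sum N \Phi$ is the expected total number $x_1 + x_2$:

```python
from math import exp, factorial

# Phi as a dict mapping (n1, n2) to the probability of that pure state:
# a product of independent Poisson distributions, truncated at 60 terms each
x1, x2 = 1.5, 0.5
phi = {(n1, n2): exp(-x1 - x2) * x1**n1 * x2**n2 / (factorial(n1) * factorial(n2))
       for n1 in range(60) for n2 in range(60)}

def N(phi):
    # N = z1 d/dz1 + z2 d/dz2 multiplies each coefficient by n1 + n2
    return {n: (n[0] + n[1]) * c for n, c in phi.items()}

def integral(phi):
    # the 'sum' notation: add up all coefficients
    return sum(phi.values())

print(integral(phi))     # very close to 1.0: probabilities sum to one
print(integral(N(phi)))  # very close to 2.0 = x1 + x2, the expected total
```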

So, for this and other reasons, we can think of the operator $N$ as an ‘observable’ that counts the total number of things. Now, in quantum mechanics Noether’s theorem tells us that an observable is a conserved quantity, one that doesn’t change with time, if it commutes with the Hamiltonian. So you should suspect that

$$[H, N] = 0$$

where the commutator $[H, N]$ is defined to be $H N - N H$.

Puzzle 2. Is $[H, N] = 0$?

If this is true, it should follow that $H$ will commute with any function of the operator $N$, for example the function

$$\delta(N - n)$$

where $\delta$ is the Kronecker delta, which equals 1 at the origin and zero elsewhere. This operator should be a projection operator, and it should project to an eigenspace of $N$, say

$$\{ \Phi : N \Phi = n \Phi \}$$

so applying it to an equilibrium state should give a new equilibrium state, say $\Phi_n$:

$$H \Phi_n = 0$$

category: blog