# The Azimuth Project Blog - network theory (Fong guest posts) (Rev #14, changes)


This page contains material on stochastic Petri nets and chemical reaction networks written by Brendan Fong. It may go into one or more blog posts. To discuss this page as it is being written, go to the Azimuth Forum.

#### Taking expectation values of the master equation

The master equation describes the time evolution of a probability distribution of pure states, while the rate equation describes the time evolution of an average value of the state of the system. You might think that this means that by taking the ‘expected value’ of the master equation, we get the rate equation. As it turns out, that’s only approximately true. But it’s exactly true if the state of the system takes the form of a product of Poisson distributions!

Let’s see how this goes. Remember that given a state $\Psi = \sum_{n \in \mathbb{N}^k} \psi_n z^n$, its expected value is $E(\Psi) = \sum_{n \in \mathbb{N}^k} n\psi_n \in [0,\infty)^k$. That is, we sum the pure states $n$, each weighted by the probability $\psi_n$ that our system is in that pure state. We’ll pretend for the moment that this sum always converges. Then

$\frac{d}{d t} E(\Psi) = \sum_{n \in \mathbb{N}^k} \sum_{\tau \in T} r(\tau) ((n+n(\tau)-m(\tau))-n)\frac{n!}{(n-m(\tau))!} \psi_n.$

To simplify this and make it look a bit more like the rate equation, we’ll write this as

$\frac{d}{d t} E(\Psi) = \sum_{\tau \in T} r(\tau) \left(\sum_{n \in \mathbb{N}^k}\frac{n!}{(n-m(\tau))!} \psi_n \right)(n(\tau)-m(\tau)).$

Then, remembering that we think of $x$ as the expected value $E(\Psi)$ of our mixed state, we can see that the rate equation and the master equation get along perfectly when

$\sum_{n \in \mathbb{N}^k}\frac{n!}{(n-m(\tau))!} \psi_n = (E(\Psi))^{m(\tau)}$

for each reaction $\tau$. This is not always true, but it is always true when $\Psi$ takes the form of the product of independent Poisson distributions with parameter $x = E(\Psi) \in [0, \infty)^k$. That is, when $\Psi$ has the form

$\Psi = e^{-x}\sum_{n \in \mathbb{N}^k} \frac{x^n}{n!}z^n.$

(Don’t forget we’re using an index-free notation here, so $e^{-x} = \prod_{i = 1}^k e^{-x_i}$ and $x^n = \prod_{i=1}^k x_i^{n_i}$!)

We can show this quite simply:

$\sum_{n \in \mathbb{N}^k}\frac{n!}{(n-m(\tau))!} \psi_n = \sum_{n \in \mathbb{N}^k}\frac{n!}{(n-m(\tau))!} \frac{x^n}{n!}e^{-x} = x^{m(\tau)}e^{-x}\sum_{n \in \mathbb{N}^k} \frac{x^{n-m(\tau)}}{(n-m(\tau))!} = x^{m(\tau)}e^{-x}e^{x} = x^{m(\tau)} = (E(\Psi))^{m(\tau)}.$

From this we can see that Poisson distributions play a special role in the relationship between the master equation and the rate equation.
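We can also check this numerically. Here is a sketch with one species, a transition with $m(\tau) = 2$, and an assumed truncation at $K$ particles (the numbers are invented for illustration): the sum $\sum_n \frac{n!}{(n-2)!}\psi_n$ agrees with $E(\Psi)^2$ for a Poisson state, but not for a uniform state with the same mean.

```python
import math
import numpy as np

# One species, states truncated at K particles (an assumed cutoff; the
# Poisson tail beyond it is negligible). For a transition with m = 2,
# compare the falling-factorial sum with E(Psi)^2.
K = 60
n = np.arange(K)
x = 3.0
poisson = np.array([math.exp(-x) * x**k / math.factorial(k) for k in range(K)])
uniform = np.zeros(K)
uniform[:7] = 1.0 / 7.0  # a non-Poisson state with the same mean, 3

def falling_sum(psi):
    return (n * (n - 1) * psi).sum()  # sum_n n!/(n-2)! psi_n

def mean(psi):
    return (n * psi).sum()            # E(Psi)

print(np.isclose(falling_sum(poisson), mean(poisson) ** 2))  # True
print(np.isclose(falling_sum(uniform), mean(uniform) ** 2))  # False
```

So the rate equation tracks the master equation exactly in the Poisson case, and only approximately otherwise.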

#### Stuff we may not really need!

For what’s to come, it’ll be good to write the master equation using number operators. For each species $i$, the number operator $N_i$ is the product of the creation and annihilation operators for that species:

$N_i = a_i^\dagger a_i$

Obviously then we have

$(a_i^\dagger a_i)^p = N_i^p$

for any natural number $p$. But more interestingly, we have

(1)${a_i^\dagger}^p a_i^p = {N_i}^{\underline{p}}$

where the underline here indicates a falling power:

${N_i}^{\underline{p}} = N_i (N_i - 1) (N_i - 2) \cdots (N_i - p + 1)$

Puzzle 2: Prove equation (1). If you get stuck, reread Part 6.
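Equation (1) can also be verified numerically on a truncated space of formal power series in one variable; here is a sketch, where the truncation size $D$ is an arbitrary choice. In the monomial basis, $a$ sends $z^n$ to $n z^{n-1}$ and $a^\dagger$ sends $z^n$ to $z^{n+1}$:

```python
import numpy as np

# Truncated one-species space with basis z^0, ..., z^(D-1).
# a z^n = n z^(n-1) and a† z^n = z^(n+1), so as matrices:
D = 10
a = np.diag(np.arange(1.0, D), k=1)      # annihilation
adag = np.diag(np.ones(D - 1), k=-1)     # creation

p = 3
lhs = np.linalg.matrix_power(adag, p) @ np.linalg.matrix_power(a, p)
n = np.arange(D)
falling = n * (n - 1) * (n - 2)          # the falling power N(N-1)(N-2)
print(np.allclose(lhs, np.diag(falling)))  # True
```

Note that ${a^\dagger}^p a^p$ first lowers the degree by $p$ and then raises it back, so the truncation introduces no edge effects here.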

Given this, it makes sense to define

$N^{\underline n} = N_1^{\underline{n_1}} \cdots N_k^{\underline{n_k}}$

for any vector $n \in \mathbb{N}^k$, since then we have

${a^\dagger}^n a^n = N^{\underline n}$

because annihilation and creation operators for different species always commute.

This lets us write the master equation a different way. We have

$({a^\dagger}^{n(\tau)} - {a^\dagger}^{m(\tau)} ) \, a^{m(\tau)} = {a^\dagger}^{n(\tau)} a^{m(\tau)} - N^{\underline{m(\tau)}}$

so the master equation is equivalent to

(2)$\frac{d}{d t} \Psi(t) = \sum_{\tau \in T} r(\tau) \, \left({a^\dagger}^{n(\tau)}a^{m(\tau)} -N^{\underline{m(\tau)}}\right) \, \Psi(t)$

#### The stochastic version of Noether's Theorem

Motivation: $H$ is a Hamiltonian for a (finite) stochastic Petri net.

Observable: diagonal operator.

Write $\int f = \sum_{s \in S} f(s)$.

Let $S$ be a countable set, and let $H: L_1(S) \to L_1(S)$ be an operator with ‘only finitely many nonzero entries in each column’ such that $e^{t H}$ is stochastic for all $t \ge 0$ (that is, $H$ is infinitesimal stochastic) – in other words, $H$ is a Hamiltonian for a finite stochastic Petri net.

Let $O: L_1(S) \to L_1(S)$ be a ‘diagonal’ operator.

Then

(3)$[H,O] =0$

if and only if

(4)$\frac{d}{d t} \int O\Psi(t) = \frac{d}{d t} \int O^2\Psi(t) = 0$

for all evolutions of states $\Psi(t) = e^{t H} \Psi_0$.

We first prove that if $[H,O]=0$ then $\frac{d}{d t} \int O\Psi(t) = 0$ and $\frac{d}{d t} \int O^2\Psi(t) = 0$. In fact this is true for all powers of our observable $O$. As our integral and $O$ do not depend on $t$, we may pass the differentiation operator through them to see

(5)$\frac{d}{d t} \int O^n\Psi = \int O^n \frac{d}{d t} \Psi.$

Now since $\frac{d}{d t} \Psi = H\Psi(t)$,

(6)$\frac{d}{d t} \int O^n\Psi = \int O^n H \Psi.$

But by our hypothesis, $O$ and $H$ commute! So

(7)$\frac{d}{d t} \int O^n\Psi = \int H O^n \Psi.$

Since $\int H \chi_s =0$ for all characteristic functions $\chi_s$, and since the integral is linear, $\int H \Phi=0$ for all $\Phi \in L_1(S)$, and we thus arrive at the desired result:

(8)$\frac{d}{d t} \int O^n\Psi(t) = 0.$
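The key fact that $\int H \Phi = 0$ is just the statement that each column of $H$ sums to zero. A minimal numerical illustration, using an arbitrary random $H$ made infinitesimal stochastic by hand:

```python
import numpy as np

# Build an arbitrary infinitesimal stochastic H on a 5-element set:
# non-negative off-diagonal entries, each column summing to zero.
rng = np.random.default_rng(1)
H = rng.random((5, 5))
np.fill_diagonal(H, 0.0)
np.fill_diagonal(H, -H.sum(axis=0))

phi = rng.standard_normal(5)             # any vector in L_1(S)
print(np.isclose((H @ phi).sum(), 0.0))  # True: the 'integral' of H phi vanishes
```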

The backward direction is a bit trickier. We now assume that the rate of change of the integral of our observable and its square are zero, and wish to show that this implies that our observable and Hamiltonian commute. More explicitly, we observe that our hypotheses are that

(9)$\frac{d}{d t} \int O\Psi = \int O H\Psi = 0$

and

(10)$\frac{d}{d t} \int O^2\Psi = \int O^2H\Psi = 0$

For elements $i$, $j$ of $S$ and an operator $M$, we write $M_{i j} = (M\chi_j)(i)$. Thinking of $i$ and $j$ as states of our system, and $M$ as a Hamiltonian, $M_{i j}$ represents the rate at which the state $j$ contributes to the state $i$. For our observable $O$, we further abbreviate each $O_{i i}$ as $O_i$, since for $i\ne j$ the entries $O_{i j}$ are zero anyway.

Observe that

(11)$[H,O]_{i j} = (H O-O H)_{i j} = \sum_{k \in S} (H_{i k}O_{k j} - O_{i k}H_{k j}) = H_{i j}O_j -O_i H_{i j} = (O_j-O_i)H_{i j}.$

Since we want to show that this is zero for each pair of elements of $S$, it suffices to show that when $H_{i j} \ne 0$, then $O_j = O_i$. That is, we need to show that if the system can move from state $j$ to state $i$, then the observable takes the same value on these two states.

Fix the element $j$, and consider the sum

(12)$\sum_{i \in S} (O_j-O_i)^2H_{i j}.$

Since $j$ is fixed, only finitely many of the $H_{i j}$ are nonzero, and so this sum always exists. Expanding this, we have

(13)$\sum_{i \in S} (O_j-O_i)^2H_{i j} = \sum_{i \in S} \left( O_j^2 H_{i j}- 2O_j O_i H_{i j} +O_i^2H_{i j}\right).$

But this sum splits as

(14)$O_j^2\sum_{i \in S}H_{i j}- 2O_j\sum_{i \in S}O_i H_{i j} +\sum_{i \in S}O_i^2H_{i j},$

and these three terms are each zero: the first because $H$ is infinitesimal stochastic, and the latter two by our hypotheses. Thus we have that

(15)$\sum_{i \in S} (O_j-O_i)^2H_{i j} = 0.$

Now when $i = j$, $O_j-O_i = 0$, and thus $(O_j-O_i)^2H_{i j} =0$. But when $i \ne j$, both $(O_j-O_i)^2$ and $H_{i j}$ are non-negative, so each term in the sum is non-negative. Since non-negative terms summing to zero must each be zero, for all $i \ne j$ we have

(16)$(O_j-O_i)^2H_{i j}=0.$

This shows that either $O_i = O_j$ or $H_{i j} = 0$, which is what we wanted to show.
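To make the theorem concrete, here is a small numerical sketch; the 3-state process and its rates are invented for illustration. A Hamiltonian that only allows jumps between states where $O$ agrees commutes with $O$ and conserves its first two moments, while one allowing a jump between states where $O$ differs does neither:

```python
import numpy as np

O = np.diag([1.0, 1.0, 2.0])  # a diagonal observable

# H_a only mixes states 0 and 1, where O takes the same value;
# H_b also allows a jump from state 1 to state 2, where O differs.
# Both have columns summing to zero (infinitesimal stochastic).
H_a = np.array([[-1.0,  2.0, 0.0],
                [ 1.0, -2.0, 0.0],
                [ 0.0,  0.0, 0.0]])
H_b = np.array([[-1.0,  2.0, 0.0],
                [ 1.0, -3.0, 0.0],
                [ 0.0,  1.0, 0.0]])

psi = np.array([0.2, 0.5, 0.3])  # an arbitrary mixed state

for H in (H_a, H_b):
    commutes = np.allclose(H @ O - O @ H, 0)
    d_first = (O @ H @ psi).sum()        # d/dt of the integral of O Psi
    d_second = (O @ O @ H @ psi).sum()   # d/dt of the integral of O^2 Psi
    print(commutes, d_first, d_second)
```

The first line of output reports a vanishing commutator and zero derivatives; the second reports a nonzero commutator together with nonzero rates of change.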


#### Example of Noether's theorem

Fragmentary stuff:

And in fact, we can think of the total number of things as an operator on our space of formal power series:

$N = N_1 + N_2$

where $N_1$ and $N_2$ are the number operators we’ve seen so often before:

$N_i = a_i^\dagger a_i = z_i \frac{\partial}{\partial z_i}$

What exactly is the relation between $N$ and the total number of things? Part of the answer is this: the expected total number of things in any state $\Phi$ is given by

$\sum N \Phi$

Let’s see why!

First of all, what does this expression even mean? The funny ‘sum’ notation here was introduced in Part 6, but now I’m using it for power series in two variables instead of one. Namely, for any power series

$\Phi = \sum_{n_1, n_2 = 0}^\infty \phi_{n_1, n_2} z_1^{n_1} z_2^{n_2}$

we define

$\sum \Phi = \sum_{n_1, n_2 = 0}^\infty \phi_{n_1, n_2}$

Thus, $\sum \Phi = 1$ whenever $\Phi$ is a state, since probabilities must sum to 1. But we also have

$\begin{array}{ccl} N \Phi &=& \sum_{n_1, n_2 = 0}^\infty \phi_{n_1, n_2} (z_1 \frac{\partial}{\partial z_1} + z_2 \frac{\partial}{\partial z_2}) z_1^{n_1} z_2^{n_2} \\ &=& \sum_{n_1, n_2 = 0}^\infty (n_1 + n_2) \phi_{n_1, n_2} z_1^{n_1} z_2^{n_2} \end{array}$

so that

$\sum N \Phi = \sum_{n_1, n_2 = 0}^\infty (n_1 + n_2) \phi_{n_1, n_2}$

And this is exactly the expected value of the total number of things.
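As a quick check of this formula (a sketch with an assumed truncation $K$), take $\Phi$ to be a product of Poisson distributions with parameters $x_1, x_2$; then $\sum N \Phi$ should come out to $x_1 + x_2$:

```python
import math
import numpy as np

x1, x2 = 1.7, 0.9
K = 40  # truncation; the Poisson tails beyond K are negligible here
p1 = np.array([math.exp(-x1) * x1**k / math.factorial(k) for k in range(K)])
p2 = np.array([math.exp(-x2) * x2**k / math.factorial(k) for k in range(K)])
phi = np.outer(p1, p2)                   # coefficients phi[n1, n2]

n = np.arange(K)
N_phi = (n[:, None] + n[None, :]) * phi  # coefficients of N Phi
print(np.isclose(N_phi.sum(), x1 + x2))  # True: the expected total number
```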

So, for this and other reasons, we can think of the operator $N$ as an ‘observable’ that counts the total number of things. Now, in quantum mechanics Noether’s theorem tells us that an observable is a conserved quantity (it doesn’t change with time) if it commutes with the Hamiltonian. So you should suspect that

$[H,N] = 0$

where the commutator $[H,N]$ is defined to be $H N - N H$.

Puzzle 2. Is $[H,N] = 0$?
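The Hamiltonian $H$ isn’t written down in this fragment, but as a guess at the intended example, take a single transition turning a thing of species 1 into a thing of species 2 at unit rate, so $H = a_2^\dagger a_1 - N_1$. Since this transition preserves $n_1 + n_2$, we can check $[H, N] = 0$ exactly on the states with $n_1 + n_2 \le C$:

```python
import numpy as np
from itertools import product

# An assumed example (not spelled out in the fragment): one transition
# turning a thing of species 1 into a thing of species 2 at unit rate,
# so H = a2† a1 - N1. It preserves n1 + n2, so truncating to states
# with n1 + n2 <= C is exact.
C = 6
states = [(n1, n2) for n1, n2 in product(range(C + 1), repeat=2)
          if n1 + n2 <= C]
idx = {s: i for i, s in enumerate(states)}
H = np.zeros((len(states), len(states)))
for (n1, n2), j in idx.items():
    H[j, j] -= n1                          # the -N1 part
    if n1 >= 1:
        H[idx[(n1 - 1, n2 + 1)], j] += n1  # the a2† a1 part

N = np.diag([float(n1 + n2) for (n1, n2) in states])  # total number operator
print(np.allclose(H @ N - N @ H, 0))  # True: [H, N] = 0
```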

If this is true, it should follow that $H$ will commute with any function of the operator $N$, for example the function

$\delta(N - n)$

where $\delta$ is the Kronecker delta, which equals 1 at the origin and zero elsewhere. This operator should be a projection operator, and it should project to an eigenspace of $N$, say

$\{ \Phi : N \Phi = n \Phi \}$

Applying this projection to an equilibrium state should give a new equilibrium state, say $\Phi_n$:

$H \Phi_n = 0$
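Continuing the hypothetical example above (species 1 turning into species 2, $H = a_2^\dagger a_1 - N_1$), we can check numerically that projecting an equilibrium state onto an eigenspace of $N$ yields another equilibrium state:

```python
import numpy as np
from itertools import product

# Hypothetical net: species 1 turns into species 2 at unit rate, so
# H = a2† a1 - N1. The transition preserves n1 + n2, so truncating to
# states with n1 + n2 <= C is exact.
C = 6
states = [(n1, n2) for n1, n2 in product(range(C + 1), repeat=2)
          if n1 + n2 <= C]
idx = {s: i for i, s in enumerate(states)}
H = np.zeros((len(states), len(states)))
for (n1, n2), j in idx.items():
    H[j, j] -= n1                          # the -N1 part
    if n1 >= 1:
        H[idx[(n1 - 1, n2 + 1)], j] += n1  # the a2† a1 part

# An equilibrium state: everything already converted to species 2,
# spread over several values of the total number of things
psi = np.zeros(len(states))
for n2, w in [(0, 0.5), (1, 0.3), (2, 0.2)]:
    psi[idx[(0, n2)]] = w

# Project onto the N = 1 eigenspace and renormalize
mask = np.array([float(n1 + n2 == 1) for (n1, n2) in states])
phi = mask * psi
phi /= phi.sum()
print(np.allclose(H @ psi, 0), np.allclose(H @ phi, 0))  # True True
```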

category: blog