The Azimuth Project
Blog - network theory (part 24)

This page is a blog article in progress, written by John Baez. To see discussions of this article while it was being written, go to the Azimuth Forum. For the final polished version, go to the Azimuth Blog.

Now we’ve reached the climax of our story so far: we’re ready to prove the deficiency zero theorem. First let’s talk about it informally a bit. Then we’ll review the notation, and then—hang on to your seat!—we’ll give the proof.

The crucial trick is to relate a bunch of chemical reactions, described by a ‘reaction network’ like this:

to a simpler problem where a system randomly hops between states arranged in the same pattern:

This is sort of amazing, because we’ve thrown out lots of detail. It’s also amazing because this simpler problem is linear. In the original problem, the chance that a reaction turns a B + E into a D is proportional to the number of B’s times the number of E’s. That’s nonlinear! But in the simplified problem, the chance that your system hops from state 4 to state 3 is just proportional to the probability that it’s in state 4 to begin with. That’s linear.

The wonderful thing is that, at least under some conditions, we can find an equilibrium for our chemical reactions starting from an equilibrium solution of the simpler problem.

Let’s roughly sketch how it works, and where we are so far. Our simplified problem is described by an equation like this:

\displaystyle{ \frac{d}{d t} \psi = H \psi }

where \psi is a function that gives the probability of being in each state, and H describes the probability per time of hopping from one state to another. We can easily understand quite a lot about the equilibrium solutions, where \psi doesn’t change at all:

H \psi = 0

because this is a linear equation. We did this in Part 23. Of course, when I say ‘easily’, that’s a relative thing: we needed to use the Perron–Frobenius theorem, which Jacob introduced in Part 20. But that’s a well-known theorem in linear algebra, and it’s easy to apply here.
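
Here is a minimal numerical sketch of this simpler problem, using a made-up 3-state cycle and invented rate constants: we build the infinitesimal stochastic matrix H from the rates and pull an equilibrium \psi out of its kernel.

```python
import numpy as np

# Hypothetical example: a 3-state cycle 0 -> 1 -> 2 -> 0 with made-up rates.
# H[j, i] is the rate of hopping from state i to state j, and each diagonal
# entry makes its column sum to zero ("infinitesimal stochastic").
rates = {(0, 1): 2.0, (1, 2): 1.0, (2, 0): 3.0}

n = 3
H = np.zeros((n, n))
for (i, j), r in rates.items():
    H[j, i] += r
    H[i, i] -= r

# An equilibrium is a vector psi with H psi = 0.  Since this graph is strongly
# connected, the kernel of H is 1-dimensional and contains a positive vector
# (Perron-Frobenius); we extract it with an SVD and normalize it.
_, _, vt = np.linalg.svd(H)
psi = vt[-1]
psi = psi / psi.sum()           # make the entries positive and sum to 1
print(np.allclose(H @ psi, 0))  # True
print(psi)
```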

In Part 22, we saw that the original problem was described by an equation like this, called the ‘rate equation’:

\displaystyle{ \frac{d x}{d t} = Y H x^Y }

Here x is a vector whose entries describe the amount of each kind of chemical: the amount of A’s, the amount of B’s, and so on. The matrix H is the same as in the simplified problem, but Y is a matrix that says how many times each chemical shows up in each spot in our reaction network:

The key thing to notice is x^Y, where we take a vector and raise it to the power of a matrix. We explained this operation back in Part 22. It’s this operation that says how many B + E pairs we have, for example, given the number of B’s and the number of E’s. It’s this that makes the rate equation nonlinear.

Now, we’re looking for equilibrium solutions of the rate equation, where the rate of change is zero:

Y H x^Y = 0

But in fact we’ll do even better! We’ll find solutions of this:

H x^Y = 0

And we’ll get these by taking our solutions of this:

H \psi = 0

and adjusting them so that

\psi = x^Y

while \psi remains a solution of H \psi = 0.

But: how do we do this ‘adjusting’? That’s the crux of the whole business! That’s what we’ll do today.

Remember, \psi is a function that gives a probability for each ‘state’, or numbered box here:

The picture here consists of two pieces, called ‘connected components’: the piece containing boxes 0 and 1, and the piece containing boxes 2, 3 and 4. It turns out that we can multiply \psi by a function that’s constant on each connected component, and if we had H \psi = 0 to begin with, that will still be true afterward. The reason is that there’s no way for \psi to ‘leak across’ from one component to another. It’s like having water in separate buckets. You can increase the amount of water in one bucket, and decrease it in another, and as long as the water’s surface remains flat in each bucket, the whole situation remains in equilibrium.
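
Here is a quick numerical illustration of the ‘separate buckets’ idea, using invented rates for a graph shaped like the one above (one component with states 0 and 1, another with states 2, 3 and 4): multiplying an equilibrium \psi by a function that is constant on each component gives another equilibrium.

```python
import numpy as np

# Made-up rates for the two-component graph pictured above.
rates = {(0, 1): 2.0, (1, 0): 1.0,
         (2, 3): 1.0, (3, 4): 3.0, (4, 2): 2.0}

H = np.zeros((5, 5))
for (i, j), r in rates.items():
    H[j, i] += r
    H[i, i] -= r

# Any psi in the kernel of H is an equilibrium; here we take a generic one.
_, s, vt = np.linalg.svd(H)
kernel = vt[np.isclose(s, 0)]           # one basis vector per connected component
psi = kernel.sum(axis=0)
print(np.allclose(H @ psi, 0))          # True

# Multiply psi by a function that is constant on each connected component:
# value 5.0 on {0, 1} and 0.5 on {2, 3, 4}.  The equilibrium is preserved.
c = np.array([5.0, 5.0, 0.5, 0.5, 0.5])
print(np.allclose(H @ (c * psi), 0))    # True
```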

That’s sort of obvious. What’s not obvious is that we can adjust \psi this way so as to ensure

\psi = x^Y

for some x.

And indeed, it’s not always true! It’s only true if our reaction network obeys a special condition. It needs to have ‘deficiency zero’. We defined this concept back in Part 21, but now we’ll finally use it for something. It turns out to be precisely the right condition to guarantee we can tweak any positive function on our set of states, multiplying it by a function that’s constant on each connected component, and get a new function \psi with

\psi = x^Y

When all is said and done, that is the key to the deficiency zero theorem.

Review

The battle is almost upon us—we’ve got one last chance to review our notation. We start with a stochastic reaction network:

This consists of:

• finite sets of transitions T, complexes K and species S,

• a map r: T \to (0,\infty) giving a rate constant for each transition,

• source and target maps s, t : T \to K saying where each transition starts and ends,

• a one-to-one map Y : K \to \mathbb{N}^S saying how each complex is made of species.
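
Before turning these into matrices, it may help to see one way such a network could be encoded concretely. Here is a minimal sketch for a made-up example, the reversible reaction A + B ↔ C, with invented rate constants; the names and layout are just illustrative choices.

```python
import numpy as np

# A hypothetical stochastic reaction network (illustrative choices only):
# species S = {A, B, C}, complexes K = {A+B, C}, and the reversible
# reaction A + B <-> C with made-up rate constants.
species = ['A', 'B', 'C']
complexes = ['A+B', 'C']

# Transitions as (source complex index, target complex index, rate constant r).
transitions = [(0, 1, 2.0),   # tau_1 : A + B -> C
               (1, 0, 1.0)]   # tau_2 : C -> A + B

# Y as an |S| x |K| matrix of natural numbers:
# Y[i, j] = how many things of species i occur in complex j.
Y = np.array([[1, 0],    # A
              [1, 0],    # B
              [0, 1]])   # C
```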

Then we extend s, t and Y to linear maps:

Then we put inner products on these vector spaces as described last time, which lets us ‘turn around’ the maps s and t by taking their adjoints:

s^\dagger, t^\dagger : \mathbb{R}^K \to \mathbb{R}^T

More surprisingly, we can ‘turn around’ Y and get a nonlinear map using ‘matrix exponentiation’:

\begin{array}{ccc} \mathbb{R}^S &\to& \mathbb{R}^K \\ x &\mapsto& x^Y \end{array}

This is most easily understood by thinking of x as a row vector and Y as a matrix:

\begin{array}{ccl} x^Y &=& {\left( \begin{array}{cccc} x_1 , & x_2 , & \dots, & x_k \end{array} \right)}^{ \left( \begin{array}{cccc} Y_{11} & Y_{12} & \cdots & Y_{1 \ell} \\ Y_{21} & Y_{22} & \cdots & Y_{2 \ell} \\ \vdots & \vdots & \ddots & \vdots \\ Y_{k1} & Y_{k2} & \cdots & Y_{k \ell} \end{array} \right)} \\ \\ &=& \left( x_1^{Y_{11}} \cdots x_k^{Y_{k1}} ,\; \dots, \; x_1^{Y_{1 \ell}} \cdots x_k^{Y_{k \ell}} \right) \end{array}

Remember, complexes are made out of species. The matrix entry Y_{i j} says how many things of the ith species there are in a complex of the jth kind. If \psi \in \mathbb{R}^K says how many complexes there are of each kind, Y \psi \in \mathbb{R}^S says how many things there are of each species. Conversely, if x \in \mathbb{R}^S says how many things there are of each species, x^Y \in \mathbb{R}^K says how many ways we can build each kind of complex from them.
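
Here is a small sketch of this operation in code, for a made-up choice of x and the Y of the A + B ↔ C example; the function name power is just an illustrative choice.

```python
import numpy as np

# (x^Y)_j = prod_i x_i^{Y[i, j]}, with Y stored as an |S| x |K| matrix as above.
def power(x, Y):
    return np.prod(x[:, None] ** Y, axis=0)

# Species (A, B, C) and complexes (A+B, C) from the made-up example:
Y = np.array([[1, 0],
              [1, 0],
              [0, 1]])
x = np.array([2.0, 3.0, 5.0])    # amounts of A, B, C
print(power(x, Y))               # [6. 5.]: 2*3 ways to build A+B, 5 ways to build C
```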

So, we get these maps:

Next, the boundary operator

\partial : \mathbb{R}^T \to \mathbb{R}^K

describes how each transition causes a change in complexes:

\partial = t - s

As we saw last time, there is a Hamiltonian

H : \mathbb{R}^K \to \mathbb{R}^K

describing a Markov process on the set of complexes, given by

H = \partial s^\dagger
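
Putting the last few formulas together, here is a rough sketch of how one could build \partial and H from the transition data of the made-up A + B ↔ C example. It takes for granted the inner products described last time, under which s^\dagger sends \psi to the vector whose \tau-th entry is r(\tau) \psi_{s(\tau)}.

```python
import numpy as np

# Building the boundary operator and H = partial s^dagger for the made-up
# A + B <-> C network.  With the rate-weighted inner product on R^T, the
# adjoint s^dagger sends psi to the vector with tau-th entry r(tau)*psi(s(tau)).
n_K = 2                                   # complexes: 0 = A+B, 1 = C
transitions = [(0, 1, 2.0), (1, 0, 1.0)]  # (source, target, rate), invented rates

S = np.zeros((n_K, len(transitions)))     # the linear map s : R^T -> R^K
T = np.zeros((n_K, len(transitions)))     # the linear map t : R^T -> R^K
r = np.zeros(len(transitions))
for tau, (src, tgt, rate) in enumerate(transitions):
    S[src, tau] = 1.0
    T[tgt, tau] = 1.0
    r[tau] = rate

boundary = T - S                          # the |K| x |T| matrix of t - s
s_dagger = np.diag(r) @ S.T               # |T| x |K|
H = boundary @ s_dagger                   # infinitesimal stochastic: columns sum to 0

print(H)                                  # [[-2.  1.]
                                          #  [ 2. -1.]]
```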

But the star of the show is the rate equation. This describes how the number of things of each species changes with time. We write these numbers in a list and get a vector x \in \mathbb{R}^S with nonnegative components. The rate equation says:

\displaystyle{ \frac{d x}{d t} = Y H x^Y }

We can read this as follows:

• x says how many things of each species we have now.

• x^Y says how many complexes of each kind we can build from these species.

• s^\dagger x^Y says how many transitions of each kind can originate from these complexes, with each transition weighted by its rate.

• H x^Y = \partial s^\dagger x^Y is the rate of change of the number of complexes of each kind, due to these transitions.

• Y H x^Y is the rate of change of the number of things of each species.
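
To see the rate equation in action, here is a rough sketch that integrates it by forward Euler for the made-up A + B ↔ C example; the step size, initial amounts and number of steps are arbitrary illustrative choices.

```python
import numpy as np

# Integrating dx/dt = Y H x^Y by forward Euler for the A + B <-> C example.
Y = np.array([[1., 0.],
              [1., 0.],
              [0., 1.]])
H = np.array([[-2.,  1.],
              [ 2., -1.]])

def power(x, Y):
    return np.prod(x[:, None] ** Y, axis=0)   # x^Y, one entry per complex

x = np.array([1.0, 2.0, 0.5])                 # initial amounts of A, B, C
dt = 0.01
for _ in range(5000):
    x = x + dt * (Y @ (H @ power(x, Y)))

print(x)                       # approaches an equilibrium...
print(H @ power(x, Y))         # ...which is complex balanced: H x^Y is (nearly) 0
```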

The deficiency zero theorem

We are looking for equilibrium solutions of the rate equation, where the number of things of each species doesn’t change at all:

Y H x^Y = 0

In fact we will find complex balanced equilibrium solutions, where even the number of complexes of each kind doesn’t change:

H x^Y = 0

More precisely, we have:

Deficiency Zero Theorem (Child’s Version). Suppose we have a reaction network obeying these two conditions:

  1. It is weakly reversible, meaning that whenever there’s a transition from one complex \kappa to another \kappa', there’s a directed path of transitions going back from \kappa' to \kappa.

  2. It has deficiency zero, meaning \mathrm{im} \partial \cap \mathrm{ker} Y = \{ 0 \}.

Then for any choice of rate constants there exists a complex balanced equilibrium solution of the rate equation where all species are present in nonzero amounts. In other words, there exists x \in \mathbb{R}^S with all components positive and such that:

H x^Y = 0

Proof. Because our reaction network is weakly reversible, the theorems in Part 23 show there exists \psi \in (0,\infty)^K with

H \psi = 0

This \psi may not be of the form x^Y, but we shall adjust \psi so that it becomes of this form, while still remaining a solution of H \psi = 0. To do this, we need a couple of lemmas:

Lemma 1. \mathrm{ker} \partial^\dagger + \mathrm{im} Y^\dagger = \mathbb{R}^K.

Proof. We need to use a few facts from linear algebra. If V is a finite-dimensional vector space with inner product, the orthogonal complement L^\perp of a subspace L \subseteq V consists of vectors that are orthogonal to everything in L:

L^\perp = \{ v \in V : \quad \forall w \in L \quad \langle v, w \rangle = 0 \}

We have

(L \cap M)^\perp = L^\perp + M^\perp

where L and M are subspaces of V and + denotes the sum of two subspaces: that is, the smallest subspace containing both. Also, if T: V \to W is a linear map between finite-dimensional vector spaces with inner product, we have

(\mathrm{ker} T)^\perp = \mathrm{im} T^\dagger

and

(\mathrm{im} T)^\perp = \mathrm{ker} T^\dagger

Now, because our reaction network has deficiency zero, we know that

\mathrm{im} \partial \cap \mathrm{ker} Y = \{ 0 \}

Taking the orthogonal complement of both sides, we get

(\mathrm{im} \partial \cap \mathrm{ker} Y)^\perp = \mathbb{R}^K

and using the rules we mentioned, we obtain

\mathrm{ker} \partial^\dagger + \mathrm{im} Y^\dagger = \mathbb{R}^K

as desired.   █
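
As a sanity check, here is a small numerical verification of Lemma 1, together with the deficiency zero condition it came from, for the made-up A + B ↔ C example; the matrices and the use of SciPy’s null_space are illustrative choices.

```python
import numpy as np
from numpy.linalg import matrix_rank
from scipy.linalg import null_space

# Hypothetical A + B <-> C network: species A, B, C; complexes 0 = A+B, 1 = C.
Y = np.array([[1., 0.],     # Y[i, j] = number of species i in complex j
              [1., 0.],
              [0., 1.]])
D = np.array([[-1.,  1.],   # boundary matrix: columns are target - source
              [ 1., -1.]])

# Deficiency zero: dim( im(boundary) intersect ker(Y) ) = 0, via the rank
# formula dim(A int B) = dim A + dim B - dim(A + B).
ker_Y = null_space(Y)
dim_sum = matrix_rank(np.hstack([D, ker_Y]))
deficiency = matrix_rank(D) + ker_Y.shape[1] - dim_sum
print("deficiency =", deficiency)                                  # 0

# Lemma 1: ker(boundary^dagger) + im(Y^dagger) should be all of R^K.
ker_D_dagger = null_space(D.T)
print(matrix_rank(np.hstack([ker_D_dagger, Y.T])) == Y.shape[1])   # True
```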

Now, given a vector \phi in \mathbb{R}^K or \mathbb{R}^S with all positive components, we can define the logarithm of such a vector, componentwise:

(\ln \phi)_i = \ln (\phi_i)

Similarly, for any vector \phi in either of these spaces, we can define its exponential in a componentwise way:

(\exp \phi)_i = \exp(\phi_i)

These operations are inverse to each other. Moreover:

Lemma 2. The nonlinear operator

\begin{array}{ccc} \mathbb{R}^S &\to& \mathbb{R}^K \\ x &\mapsto& x^Y \end{array}

is related to the linear operator

\begin{array}{ccc} \mathbb{R}^S &\to& \mathbb{R}^K \\ x &\mapsto& Y^\dagger x \end{array}

by the formula

x^Y = \exp(Y^\dagger \ln x)

which holds for all x \in (0,\infty)^S.

Proof. A straightforward calculation. By the way, this formula would look a bit nicer if we treated \ln x as a row vector and multiplied it on the right by Y: then we would have

x^Y = \exp((\ln x) Y)

The problem is that we are following the usual convention of multiplying vectors by matrices on the left, yet writing the matrix on the right in x^Y. Taking the transpose Y^\dagger of the matrix Y serves to compensate for this.   █
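
Here is a quick numerical check of this formula for a randomly generated Y and x; the specific sizes and the random seed are arbitrary.

```python
import numpy as np

# Checking Lemma 2, x^Y = exp(Y^dagger ln x), for a made-up Y and x.
rng = np.random.default_rng(0)
Y = rng.integers(0, 3, size=(3, 4)).astype(float)   # 3 species, 4 complexes
x = rng.uniform(0.5, 2.0, size=3)                    # a positive vector in R^S

direct  = np.prod(x[:, None] ** Y, axis=0)           # (x^Y)_j = prod_i x_i^{Y_ij}
via_log = np.exp(Y.T @ np.log(x))                    # exp(Y^dagger ln x)
print(np.allclose(direct, via_log))                  # True
```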

Now, given our vector \psi \in (0,\infty)^K with H \psi = 0, we can take its logarithm and get \ln \psi \in \mathbb{R}^K. Lemma 1 says that

\mathbb{R}^K = \mathrm{ker} \partial^\dagger + \mathrm{im} Y^\dagger

so we can write

\ln \psi = \alpha + Y^\dagger \beta

where \alpha \in \mathrm{ker} \partial^\dagger and \beta \in \mathbb{R}^S. Moreover, we can write

\beta = \ln x

for some x \in (0,\infty)^S, so that

\ln \psi = \alpha + Y^\dagger (\ln x)

Exponentiating both sides componentwise, we get

\psi = \exp(\alpha) \; \exp(Y^\dagger (\ln x))

where at right we are taking the componentwise product of vectors. Thanks to Lemma 2, we conclude that

\psi = \exp(\alpha) x^Y

So, we have taken \psi and almost written it in the form x^Y—but not quite! We can adjust \psi to make it be of this form:

\exp(-\alpha) \psi = x^Y

Clearly all the components of \exp(-\alpha) \psi are positive, since the same is true for both \psi and \exp(-\alpha). So, the only remaining task is to check that

H(\exp(-\alpha) \psi) = 0

We do this using two lemmas:

Lemma 3. If H \psi = 0 and \alpha \in \mathrm{ker} \partial^\dagger, then H(\exp(-\alpha) \psi) = 0.

Proof. It is enough to check that multiplication by \exp(-\alpha) commutes with the Hamiltonian H, since then

H(\exp(-\alpha) \psi) = \exp(-\alpha) H \psi = 0

Recall from Part 23 that H is the Hamiltonian of a Markov process associated to this ‘graph with rates’:

As noted here:

• John Baez and Brendan Fong, A Noether theorem for Markov processes.

multiplication by some function on K commutes with H if and only if that function is constant on each connected component of this graph. Such functions are called conserved quantities.

So, it suffices to show that \exp(-\alpha) is constant on each connected component. For this, it is enough to show that \alpha itself is constant on each connected component. But this will follow from the next lemma, since \alpha \in \mathrm{ker} \partial^\dagger.   █

Lemma 4. A function \alpha \in \mathbb{R}^K is a conserved quantity iff \partial^\dagger \alpha = 0. In other words, \alpha is constant on each connected component of the graph s, t: T \to K iff \partial^\dagger \alpha = 0.

Proof. Suppose \partial^\dagger \alpha = 0, or in other words, \alpha \in \mathrm{ker} \partial^\dagger, or in still other words, \alpha \in (\mathrm{im} \partial)^\perp. To show that \alpha is constant on each connected component, it suffices to show that whenever we have two complexes connected by a transition, like this:

\tau: \kappa \to \kappa'

then \alpha takes the same value at both these complexes:

\alpha_\kappa = \alpha_{\kappa'}

To see this, note

\partial \tau = t(\tau) - s(\tau) = \kappa' - \kappa

and since \alpha \in (\mathrm{im} \partial)^\perp, we conclude

\langle \alpha, \kappa' - \kappa \rangle = 0

But calculating this inner product, we see

\alpha_{\kappa'} - \alpha_{\kappa} = 0

as desired.

For the converse, we simply turn the argument around: if \alpha is constant on each connected component, we see \langle \alpha, \kappa' - \kappa \rangle = 0 whenever there is a transition \tau : \kappa \to \kappa'. It follows that \langle \alpha, \partial \tau \rangle = 0 for every transition \tau, and since the vectors \partial \tau span \mathrm{im} \partial, we conclude \alpha \in (\mathrm{im} \partial)^\perp.

And thus concludes the proof of the lemma!   █

And thus concludes the proof of the theorem!   █
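
To make the whole argument concrete, here is a rough numerical walk-through of the proof for the made-up A + B ↔ C example with invented rates, assuming the standard inner product on \mathbb{R}^K: find \psi with H \psi = 0, split \ln \psi = \alpha + Y^\dagger \beta using Lemma 1, and check that x = \exp(\beta) gives a complex balanced equilibrium.

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical network A + B <-> C.  Species: A, B, C.  Complexes: 0 = A+B, 1 = C.
Y = np.array([[1., 0.],    # Y[i, j] = number of species i in complex j
              [1., 0.],
              [0., 1.]])
D = np.array([[-1.,  1.],  # boundary: columns are target - source for each transition
              [ 1., -1.]])
H = np.array([[-2.,  1.],  # Markov generator on complexes, built as in the earlier sketch
              [ 2., -1.]])

def power(x, Y):                       # (x^Y)_j = prod_i x_i^{Y[i, j]}
    return np.prod(x[:, None] ** Y, axis=0)

# Step 1: a positive equilibrium psi of the Markov process, H psi = 0.
psi = null_space(H)[:, 0]
psi = psi * np.sign(psi[0])            # fix the sign so all entries are positive
assert np.allclose(H @ psi, 0) and np.all(psi > 0)

# Step 2: write ln(psi) = alpha + Y^dagger beta with alpha in ker(boundary^dagger).
K_basis = null_space(D.T)              # basis of ker(boundary^dagger) = (im boundary)^perp
A = np.hstack([K_basis, Y.T])          # Lemma 1: these columns span R^K
coeffs = np.linalg.lstsq(A, np.log(psi), rcond=None)[0]
alpha = K_basis @ coeffs[:K_basis.shape[1]]
beta  = coeffs[K_basis.shape[1]:]

# Step 3: x = exp(beta) gives a complex balanced equilibrium of the rate equation.
x = np.exp(beta)
print(np.allclose(power(x, Y), np.exp(-alpha) * psi))  # psi adjusted into the form x^Y
print(np.allclose(H @ power(x, Y), 0))                 # H x^Y = 0
```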

And thus concludes this post!

category: blog