This page is a blog article in progress, written by John Baez and Jacob Biamonte. To see discussions of this article as it was being written, go to the Azimuth Forum. For the final polished version, visit the Azimuth Blog.
joint with Jacob Biamonte
Last time we explained how ‘reaction networks’, as used in chemistry, are just another way of talking about Petri nets. We stated an amazing result on reaction networks: Feinberg’s deficiency zero theorem. This settles quite a number of questions about chemical reactions. Now let’s illustrate it with an example.
Our example won’t show how powerful this theorem is: it’s too simple. But it’ll help explain the ideas involved.
A diatomic molecule consists of two atoms of the same kind, stuck together:
At room temperature there are 5 elements that are diatomic gases: hydrogen, nitrogen, oxygen, fluorine, chlorine. Bromine is a diatomic liquid, but easily evaporates into a diatomic gas:
Iodine is a crystal at room temperature:
but if you heat it a bit, it becomes a diatomic liquid and then a gas:
so people often list it as a seventh member of the diatomic club.
When you heat any diatomic gas enough, it starts becoming a ‘monatomic’ gas as molecules break down into individual atoms. However, just as a diatomic molecule can break apart into two atoms:

$$A_2 \to A + A$$

two atoms can recombine to form a diatomic molecule:

$$A + A \to A_2$$
So in equilibrium, the gas will be a mixture of diatomic and monatomic forms. The exact amount of each will depend on the temperature and pressure, since these affect the likelihood that two colliding atoms stick together, or a diatomic molecule splits apart. The detailed nature of our gas also matters, of course.
But we don’t need to get into these details here! Instead, we can just write down the ‘rate equation’ for the reactions we’re talking about. All the details we’re ignoring will be hiding in some constants called ‘rate constants’. We won’t try to compute these; we’ll leave that to our chemist friends.
To write down our rate equation, we start by drawing a ‘reaction network’. For this, we can be a bit abstract and call the diatomic molecule $B$ instead of $A_2$. Then it looks like this:

$$A + A \rightleftharpoons B$$
We could write down the same information using a Petri net:
But today let’s focus on the reaction network! Staring at this picture, we can read off various things:
Species. The species are the different kinds of atoms, molecules, etc. In our example the set of species is $S = \{A, B\}$.
Complexes. A complex is a finite sum of species, like $A + A$, or $B$, or for a fancier example, $A + A + A + B$, which we can write as $3A + B$ using more efficient notation. So, we can think of a complex as a vector in $\mathbb{N}^S$, saying how many times each species shows up. The complexes that actually show up in our reaction network form a set $K \subseteq \mathbb{N}^S$. In our example, $K = \{A + A,\; B\}$.
Reactions. A reaction is an arrow going from one complex to another. In our example we have two reactions: $A + A \to B$ and $B \to A + A$.
Chemists define a reaction network to be a triple $(S, K, T)$, where $S$ is a set of species, $K \subseteq \mathbb{N}^S$ is the set of complexes that appear in the reactions, and $T$ is the set of reactions $\kappa \to \kappa'$, where $\kappa, \kappa' \in K$. (Stochastic Petri net people call reactions transitions, hence the letter $T$.)
So, in our example we have:

$$S = \{A, B\}, \qquad K = \{A + A,\; B\}, \qquad T = \{A + A \to B,\; B \to A + A\}$$
To get the rate equation, we also need one more piece of information: a rate constant $r(\tau)$ for each reaction $\tau \in T$. This is a nonnegative real number that affects how fast the reaction goes. All the details of how our particular diatomic gas behaves at a given temperature and pressure are packed into these constants!
The rate equation says how the expected numbers of the various species—atoms, molecules and the like—change with time. This equation is deterministic. It’s a good approximation when the numbers are large and any fluctuations in these numbers are negligible by comparison.
Here’s the general form of the rate equation:

$$\frac{d x_i}{d t} = \sum_{\tau \in T} r(\tau) \left( n_i(\tau) - m_i(\tau) \right) x^{m(\tau)}$$
Let’s take a closer look. The quantity $x_i$ is the expected population of the $i$th species. So, this equation tells us how that changes. But what about the right-hand side? As you might expect, it’s a sum over reactions. And:
The term for the reaction $\tau$ is proportional to the rate constant $r(\tau)$.
Each reaction goes between two complexes, so we can write it as $\tau \colon m(\tau) \to n(\tau)$. Among chemists the input $m(\tau)$ is called the reactant complex, and the output $n(\tau)$ is called the product complex. The difference $n_i(\tau) - m_i(\tau)$ tells us how many items of the $i$th species get created, minus how many get destroyed. So, it’s the net amount of this species that gets produced by the reaction $\tau$. The term for the reaction $\tau$ is proportional to this, too.
Finally, the law of mass action says that the rate of a reaction is proportional to the product of the concentrations of the species that enter as inputs. More precisely, if we have a reaction $\tau$ whose input is the complex $m(\tau) = (m_1(\tau), \dots, m_k(\tau))$, we define $x^{m(\tau)} = x_1^{m_1(\tau)} \cdots x_k^{m_k(\tau)}$. The law of mass action says the term for the reaction $\tau$ is proportional to this, too!
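To make this concrete, here is a small Python sketch of the right-hand side of the general rate equation. The function name `rate_equation_rhs` and the data layout are our own choices, just for illustration:

```python
import numpy as np

def rate_equation_rhs(x, reactions):
    """Mass-action right-hand side: the sum over reactions tau of
    r(tau) * (n(tau) - m(tau)) * x^m(tau)."""
    x = np.asarray(x, dtype=float)
    dxdt = np.zeros_like(x)
    for r, m, n in reactions:
        m, n = np.asarray(m), np.asarray(n)
        flux = r * np.prod(x ** m)   # law of mass action: r(tau) * x^m(tau)
        dxdt += (n - m) * flux       # net production of each species
    return dxdt

# Our example: species (A, B); A + A -> B with one rate constant,
# B -> A + A with another (here both set to 1).
reactions = [(1.0, (2, 0), (0, 1)), (1.0, (0, 1), (2, 0))]
print(rate_equation_rhs([1.0, 1.0], reactions))  # [0. 0.]  (an equilibrium)
```

At the point $(1, 1)$ the two fluxes cancel, so the right-hand side vanishes: that point is an equilibrium.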
Let’s see what this says for the reaction network we’re studying:

$$A + A \rightleftharpoons B$$
Let’s use $x_1$ (resp. $x_2$) to stand for the population of the species $A$ (resp. $B$). Let the rate constant for the reaction $A + A \to B$ be $\alpha$, and let the rate constant for $B \to A + A$ be $\beta$. Then the rate equation is this:

$$\frac{d x_1}{d t} = -2 \alpha x_1^2 + 2 \beta x_2$$

$$\frac{d x_2}{d t} = \alpha x_1^2 - \beta x_2$$
This is a bit intimidating. However, we can solve it in closed form thanks to something every physicist knows to treasure: a conserved quantity.
We’ve got two species, $A$ and $B$. But remember, $B$ is just an abbreviation for a molecule $A_2$ made of two $A$ atoms. So, the total number of $A$ atoms is conserved by the reactions $A + A \to B$ and $B \to A + A$. This number is the number of $A$’s plus twice the number of $B$’s: $x_1 + 2 x_2$. So, this should be a conserved quantity: it should not change with time. Indeed, by adding the first equation above to twice the second, we see:

$$\frac{d}{d t} \left( x_1 + 2 x_2 \right) = 0$$
So, any solution will move along a line

$$x_1 + 2 x_2 = c$$
for some constant $c$. We can use this fact to rewrite the rate equation just in terms of $x_1$:

$$\frac{d x_1}{d t} = -2 \alpha x_1^2 + \beta (c - x_1)$$
and this is a separable differential equation, so we can solve it.
This sort of trick won’t work for more complicated examples. But the idea remains important: the numbers of atoms of various kinds—hydrogen, helium, lithium, and so on—are conserved by chemical reactions, so a solution of the rate equation can’t roam freely in $[0,\infty)^k$, where $k$ is the number of species. It will be trapped in a translate of a subspace called the ‘stoichiometric subspace’, which we’ll define below. And this is very important.
We don’t feel like doing the integral required to solve our rate equation in closed form, because this trick doesn’t generalize very far. On the other hand, we can always solve the rate equation numerically. So let’s try that!
For example, suppose we set $\alpha = \beta = 1$. We can plot the solutions for three different choices of initial conditions. We get these graphs:
It looks like the solution always approaches an equilibrium. We seem to be getting different equilibria for different initial conditions, and the pattern is a bit mysterious. However, something nice happens when we plot the ratio $x_1^2 / x_2$:
Apparently it always converges to 1. Why should that be? It’s not terribly surprising. With both rate constants equal to 1, the reaction $A + A \to B$ proceeds at a rate equal to the square of the number of $A$’s, namely $x_1^2$. The reverse reaction proceeds at a rate equal to the number of $B$’s, namely $x_2$. So in equilibrium, we should have $x_1^2 = x_2$.
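Readers who want to reproduce these experiments can integrate the rate equation numerically with SciPy. The three initial conditions below are our own sample choices, not necessarily the ones used for the graphs above:

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = beta = 1.0

def rhs(t, x):
    """Rate equation for A + A <-> B, with populations x = (x1, x2)."""
    x1, x2 = x
    return [-2*alpha*x1**2 + 2*beta*x2, alpha*x1**2 - beta*x2]

# Three sample initial conditions (our own choices).
for x0 in [(0.0, 3.0), (3.0, 0.0), (2.0, 2.0)]:
    sol = solve_ivp(rhs, (0.0, 20.0), x0, rtol=1e-9, atol=1e-12)
    x1, x2 = sol.y[:, -1]
    print(f"start {x0}: x1 + 2*x2 = {x1 + 2*x2:.4f} (conserved), "
          f"x1^2/x2 = {x1**2 / x2:.4f}")
```

Each run conserves $x_1 + 2 x_2$, and the ratio $x_1^2 / x_2$ comes out close to 1 by the end of the time interval, matching the graphs.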
But why is the equilibrium stable? In this example we could see that using the closed-form solution, or maybe just common sense. But it also follows from a powerful theorem that handles a lot of reaction networks.
It’s called Feinberg’s deficiency zero theorem, and we saw it last time. Very roughly, it says that if our reaction network is ‘weakly reversible’ and has ‘deficiency zero’, the rate equation will have equilibrium solutions that behave about as nicely as you could want.
Let’s see how this works. We need to remember some jargon:
Weakly reversible. A reaction network is weakly reversible if for every reaction $\kappa \to \kappa'$ in the network, there exists a path of reactions in the network starting at $\kappa'$ and leading back to $\kappa$.
Reversible. A reaction network is reversible if for every reaction $\kappa \to \kappa'$ in the network, $\kappa' \to \kappa$ is also a reaction in the network. Any reversible reaction network is weakly reversible. Our example is reversible, since it consists of the reactions $A + A \to B$ and $B \to A + A$.
But what about ‘deficiency zero’? We defined that concept last time, but let’s review:
Stoichiometric subspace. The stoichiometric subspace is the subspace of $\mathbb{R}^S$ spanned by the vectors of the form $n(\tau) - m(\tau)$ for all reactions $\tau$ in our reaction network. In our example we have the reactions $A + A \to B$ and $B \to A + A$, which give the vectors $(-2, 1)$ and $(2, -1)$, or if you prefer, $B - 2A$ and $2A - B$. These vectors are linearly dependent, so the stoichiometric subspace has dimension 1.
Deficiency. The deficiency of a reaction network is the number of complexes, minus the number of connected components, minus the dimension of the stoichiometric subspace. In our example there are 2 complexes, 1 connected component, and the dimension of the stoichiometric subspace is 1. So, our reaction network has deficiency 2 - 1 - 1 = 0.
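This bookkeeping is easy to automate. Here is a sketch, using our own encoding of the network, that computes the three numbers entering the deficiency for our example:

```python
import numpy as np

# Complexes as vectors over the species (A, B); reactions as
# (source, target) pairs of complex indices: A + A -> B and B -> A + A.
complexes = [(2, 0), (0, 1)]
reactions = [(0, 1), (1, 0)]
n_complexes = len(complexes)

# Connected components of the reaction graph, via union-find.
parent = list(range(n_complexes))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i
for s, t in reactions:
    parent[find(s)] = find(t)
n_components = len({find(i) for i in range(n_complexes)})

# Dimension of the stoichiometric subspace: rank of the matrix whose
# rows are the vectors n(tau) - m(tau).
diffs = np.array([np.subtract(complexes[t], complexes[s]) for s, t in reactions])
dim_stoich = int(np.linalg.matrix_rank(diffs))

deficiency = n_complexes - n_components - dim_stoich
print(n_complexes, n_components, dim_stoich, deficiency)  # 2 1 1 0
```

The output confirms the count above: 2 complexes, 1 connected component, a 1-dimensional stoichiometric subspace, and so deficiency zero.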
So, the deficiency zero theorem applies! What does it say? To understand it, we need a bit more jargon. First of all, a vector $x \in [0,\infty)^S$ tells us how much we’ve got of each species: the amount of the $i$th species is the number $x_i$. And then:

Stoichiometric compatibility class. The stoichiometric compatibility class of a vector $x \in [0,\infty)^S$ is the set of all vectors in $[0,\infty)^S$ of the form $x + v$, where $v$ lies in the stoichiometric subspace.
In our example, where the stoichiometric subspace is spanned by the vector $(-2, 1)$, the stoichiometric compatibility class of the vector $x = (x_1, x_2)$ is the line consisting of points

$$(x_1 - 2s, \; x_2 + s)$$

where the parameter $s$ ranges over all real numbers for which both coordinates remain nonnegative. Notice that this line can also be written as

$$x_1 + 2 x_2 = c$$

where the constant $c$ is the value of $x_1 + 2 x_2$ at our original point.
We’ve already seen that if we start with initial conditions on such a line, the solution will stay on this line. And that’s how it always works: as time passes, any solution of the rate equation stays in the same stoichiometric compatibility class!
In other words: each stoichiometric compatibility class is cut out by a bunch of linear equations, one for each linear conservation law that all the reactions in our network obey. Here by a linear conservation law we mean a law saying that some linear combination of the numbers of species does not change.
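Concretely, the linear conservation laws are the vectors orthogonal to all the reaction vectors $n(\tau) - m(\tau)$, so we can find them with a null space computation. A sketch for our example:

```python
import numpy as np
from scipy.linalg import null_space

# Rows are the reaction vectors n(tau) - m(tau)
# for A + A -> B and B -> A + A.
diffs = np.array([[-2.0, 1.0], [2.0, -1.0]])

# A linear conservation law is a vector w with w . (n(tau) - m(tau)) = 0
# for every reaction tau, i.e. an element of the null space of this matrix.
laws = null_space(diffs)
w = laws[:, 0] / laws[0, 0]   # normalize the first entry to 1
print(w)  # [1. 2.] : the conserved quantity x1 + 2*x2
```

The one-dimensional null space is spanned by $(1, 2)$, recovering the conserved quantity $x_1 + 2 x_2$ found earlier.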
We now finally have enough jargon in our arsenal to state this result:
Zero Deficiency Theorem (Feinberg). If a reaction network is weakly reversible, has deficiency zero, and all its rate constants are positive, then the rate equation has exactly one equilibrium solution in each positive stoichiometric compatibility class. Any sufficiently nearby solution that starts in the same class will approach this equilibrium as $t \to +\infty$.
In our example, this theorem says there’s just one positive equilibrium in each line

$$x_1 + 2 x_2 = c$$

with $c > 0$.
We can find it by setting the time derivatives to zero:

$$\frac{d x_1}{d t} = \frac{d x_2}{d t} = 0$$
Solving these, we get

$$x_2 = \frac{\alpha}{\beta} \, x_1^2$$
So, these are our equilibrium solutions. It’s easy to verify that indeed, there’s one of these in each stoichiometric compatibility class $x_1 + 2 x_2 = c$ with $c > 0$. And the zero deficiency theorem also tells us that any sufficiently nearby solution that starts in the same class will approach this equilibrium as $t \to +\infty$.
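As a sanity check, we can work out this equilibrium explicitly: substituting $x_2 = (\alpha/\beta) x_1^2$ into $x_1 + 2 x_2 = c$ gives the quadratic $2 \alpha x_1^2 + \beta x_1 - \beta c = 0$, whose unique positive root is the equilibrium value of $x_1$. A quick sketch (the function name is our own):

```python
import math

def equilibrium(alpha, beta, c):
    """Unique positive equilibrium on the line x1 + 2*x2 = c:
    the positive root of 2*alpha*x1**2 + beta*x1 - beta*c = 0."""
    x1 = (-beta + math.sqrt(beta**2 + 8*alpha*beta*c)) / (4*alpha)
    return x1, (alpha / beta) * x1**2

x1, x2 = equilibrium(1.0, 1.0, 6.0)
print(x1, x2)                         # 1.5 2.25
# Check that both time derivatives really vanish there:
print(-2*x1**2 + 2*x2, x1**2 - x2)    # 0.0 0.0
```

For instance, with $\alpha = \beta = 1$ and $c = 6$ the equilibrium is $(x_1, x_2) = (1.5, 2.25)$, and plugging it back into the rate equation gives zero on both lines.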
This partially explains what we saw before in our graphs. It shows that in the case $\alpha = \beta = 1$, any solution that starts by nearly having

$$x_1^2 = x_2$$

will actually have

$$\lim_{t \to +\infty} \frac{x_1(t)^2}{x_2(t)} = 1$$
But in fact, in this example we don’t even need to start near the equilibrium for our solution to approach the equilibrium! What about in general? We don’t know, but just to get the ball rolling, we’ll risk the following wild guess:
Conjecture. If a reaction network is weakly reversible and the rate constants are positive, the rate equation has exactly one equilibrium solution in each positive stoichiometric compatibility class, and any positive solution that starts in the same class will approach this equilibrium as $t \to +\infty$.
If anyone knows a proof or counterexample, we’d be interested. If this result were true, it would really clarify the dynamics of reaction networks in the zero deficiency case.