Blog - network theory (part 13)

This page is a blog article in progress, written by John Baez. To see discussions of it as it was being written, go to the Azimuth Forum. To see the final published version, visit the Azimuth Blog.

Unlike some recent posts, this will be very short. I merely want to show you the quantum and stochastic versions of Noether’s theorem, side by side. Having made my sacrificial offering to the math gods last time by explaining how everything generalizes when we replace our finite set of states $X$ by an infinite set or an even more general measure space, I’ll now relax and state Noether’s theorem only when $X$ is a finite set.

Let me write the quantum and stochastic Noether’s theorem so they look almost the same:

**Theorem.** Let $X$ be a finite set. Suppose $H$ is a self-adjoint operator on ${L}^{2}(X)$, and let $O$ be an observable. Then

$$[H,O]=0$$

if and only if for all states $\psi (t)$ obeying Schrödinger’s equation

$$\frac{d}{dt}\psi (t)=-iH\psi (t)$$

the expected value of $O$ in the state $\psi (t)$ does not change with $t$.
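As a quick numerical sketch of this statement (my own illustration, assuming NumPy is available), we can pick a self-adjoint $H$, an observable $O$ commuting with it, solve Schrödinger's equation via the eigendecomposition of $H$, and watch the expected value of $O$ stay constant:

```python
import numpy as np

# A hypothetical 3-state example: H and O are commuting self-adjoint matrices.
# Taking O to be a polynomial in H guarantees [O, H] = 0.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2          # self-adjoint Hamiltonian
O = H @ H + 2 * H                 # commutes with H
assert np.allclose(O @ H - H @ O, 0)

# Solve Schrodinger's equation: psi(t) = exp(-iHt) psi(0),
# computed from the eigendecomposition H = U diag(E) U^dagger.
E, U = np.linalg.eigh(H)
psi0 = rng.normal(size=3) + 1j * rng.normal(size=3)
psi0 /= np.linalg.norm(psi0)

def psi(t):
    return U @ (np.exp(-1j * E * t) * (U.conj().T @ psi0))

def expected(Op, t):
    p = psi(t)
    return (p.conj() @ (Op @ p)).real

# The expected value of O does not change with t:
vals = [expected(O, t) for t in np.linspace(0, 5, 6)]
print(np.allclose(vals, vals[0]))   # True
```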

**Theorem.** Let $X$ be a finite set. Suppose $H$ is an infinitesimal stochastic operator on ${L}^{1}(X)$, and let $O$ be an observable. Then

$$[O,H]=0$$

if and only if for all states $\psi (t)$ obeying the master equation

$$\frac{d}{dt}\psi (t)=H\psi (t)$$

the expected values of $O$ and ${O}^{2}$ in the state $\psi (t)$ do not change with $t$.

This makes the big difference stick out like a sore thumb: in the quantum version we only need the expected value of $O$, while in the stochastic version we need the expected values of $O$ and ${O}^{2}$!
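Here is a numerical sketch of the stochastic statement in the same spirit (again my own illustration, with NumPy assumed): an infinitesimal stochastic $H$ has nonnegative off-diagonal entries and columns summing to zero, and an observable is multiplication by a function on $X$, i.e. a diagonal matrix. Taking two disconnected pairs of states with $O$ constant on each pair gives $[O,H]=0$, and both $\langle O\rangle$ and $\langle O^2\rangle$ stay constant under the master equation:

```python
import numpy as np

def expm(M, terms=60):
    """Matrix exponential via Taylor series -- adequate for small matrices."""
    out = np.eye(len(M))
    term = np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Two disconnected pairs of states; O is constant on each pair, so [O, H] = 0.
H = np.array([[-1.,  1.,  0.,  0.],
              [ 1., -1.,  0.,  0.],
              [ 0.,  0., -2.,  2.],
              [ 0.,  0.,  2., -2.]])   # infinitesimal stochastic: columns sum to 0
O = np.diag([5., 5., 7., 7.])          # an observable: a function on X
assert np.allclose(O @ H - H @ O, 0)

psi0 = np.array([0.1, 0.2, 0.3, 0.4])  # a probability distribution on X

for t in (0.0, 0.5, 1.0, 2.0):
    psi_t = expm(H * t) @ psi0         # master equation solution
    mean_O  = np.sum(O @ psi_t)        # expected value of O
    mean_O2 = np.sum(O @ O @ psi_t)    # expected value of O^2
    print(t, mean_O, mean_O2)          # both stay constant in t
```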

Brendan Fong proved the stochastic version of Noether’s theorem in Part 11. Now let’s do the quantum version.

My statement of the quantum version was silly in a couple of ways. First, I spoke of the Hilbert space ${L}^{2}(X)$ for a finite set $X$, but any finite-dimensional Hilbert space will do equally well. Second, I spoke of the “self-adjoint operator” $H$ and the “observable” $O$, but in quantum mechanics an observable is the *same thing* as a self-adjoint operator!

Why did I talk in such a silly way? Because I was attempting to emphasize the similarity between quantum mechanics and stochastic mechanics. But they’re somewhat different. For example, in stochastic mechanics we have two very different concepts: infinitesimal stochastic operators, which generate *symmetries*, and functions on our set $X$, which are *observables*. But in quantum mechanics something wonderful happens: self-adjoint operators both *generate symmetries* and *are observables!* So, my attempt was a bit strained.

Let me state and prove a less silly quantum version of Noether’s theorem, which implies the one above:

**Theorem.** Suppose $H$ and $O$ are self-adjoint operators on a finite-dimensional Hilbert space. Then

$$[O,H]=0$$

if and only if for all states $\psi (t)$ obeying Schrödinger’s equation

$$\frac{d}{dt}\psi (t)=-iH\psi (t)$$

the expected value of $O$ in the state $\psi (t)$ does not change with $t$:

$$\frac{d}{dt}\langle\psi(t),O\psi(t)\rangle=0$$

**Proof.** The trick is to compute the time derivative I just wrote down. Using Schrödinger’s equation, the product rule, and the fact that $H$ is self-adjoint, we get:

$$\begin{array}{ccl}{\displaystyle \frac{d}{dt}\langle\psi(t),O\psi(t)\rangle}& =& \langle -iH\psi(t),O\psi(t)\rangle+\langle\psi(t),O(-iH\psi(t))\rangle\\ & =& i\langle\psi(t),HO\psi(t)\rangle-i\langle\psi(t),OH\psi(t)\rangle\\ & =& -i\langle\psi(t),[O,H]\psi(t)\rangle\end{array}$$

So, if $[O,H]=0$, clearly the above time derivative vanishes. Conversely, if this time derivative vanishes for all states $\psi (t)$ obeying Schrödinger’s equation, we know

$$\langle\psi,[O,H]\psi\rangle=0$$

for all states $\psi$ and thus all vectors in our Hilbert space. Does this imply $[O,H]=0$? Yes, because $i$ times the commutator of two self-adjoint operators is self-adjoint, and for any self-adjoint operator $A$ we have

$$\forall\psi \;\; \langle\psi,A\psi\rangle=0 \qquad\Rightarrow\qquad A=0$$

This is a well-known fact whose proof goes like this. Assume $\langle\psi,A\psi\rangle=0$ for all $\psi$. Then to show $A=0$, it is enough to show $\langle\varphi,A\psi\rangle=0$ for all $\varphi$ and $\psi$. But we have a marvelous identity:

(1)$$\begin{array}{ccl}\langle\varphi,A\psi\rangle& =& \frac{1}{4}\big(\langle\varphi+\psi,\,A(\varphi+\psi)\rangle \;-\; \langle\psi-\varphi,\,A(\psi-\varphi)\rangle\\ & & +\,i\langle\psi+i\varphi,\,A(\psi+i\varphi)\rangle \;-\; i\langle\psi-i\varphi,\,A(\psi-i\varphi)\rangle\big)\end{array}$$

and all four terms on the right vanish by our assumption. █

Equation (1) is sometimes called the **polarization identity**. In plain English, it says: if you know the diagonal entries of a self-adjoint matrix in every basis, you can figure out *all* the entries of that matrix in every basis.
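For the skeptical, the polarization identity is easy to check numerically (my own sanity check, assuming NumPy, with the physics convention that the inner product is conjugate-linear in its first argument):

```python
import numpy as np

rng = np.random.default_rng(1)

def ip(a, b):
    return np.vdot(a, b)   # <a, b>, conjugate-linear in the first argument

n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))  # any operator works
phi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)

# Left and right sides of the polarization identity (1):
lhs = ip(phi, A @ psi)
rhs = 0.25 * (ip(phi + psi, A @ (phi + psi))
              - ip(psi - phi, A @ (psi - phi))
              + 1j * ip(psi + 1j * phi, A @ (psi + 1j * phi))
              - 1j * ip(psi - 1j * phi, A @ (psi - 1j * phi)))
print(np.isclose(lhs, rhs))   # True
```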

Why is it called the ‘polarization identity’? I think because it shows up in optics, in the study of polarized light.

In both the quantum and stochastic cases, the time derivative of the expected value of an observable $O$ is expressed in terms of $[O,H]$. In the quantum case we have

$$\frac{d}{dt}\langle\psi(t),O\psi(t)\rangle=-i\langle\psi(t),[O,H]\psi(t)\rangle$$

and for the right side to *always* vanish, we need $[O,H]=0$, thanks to the polarization identity. In the stochastic case, a perfectly analogous equation holds:

$$\frac{d}{dt}\int O\psi (t)=\int [O,H]\psi (t)$$

but now the right side can always vanish even without $[O,H]=0$. We saw a counterexample in Part 11. There is nothing like the polarization identity to save us! To get $[O,H]=0$ we need a supplementary hypothesis, for example the vanishing of

$$\frac{d}{dt}\int {O}^{2}\psi (t)$$
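A small counterexample in the spirit of the one from Part 11 (constructed here for illustration, not copied from that post): a walker at the middle of $X = \{0,1,2\}$ jumps left or right at equal rates. With $O$ the position observable, every column of $[O,H]$ sums to zero, so the expected value of $O$ is conserved for *every* state, yet $[O,H]\ne 0$, and the expected value of $O^2$ gives it away:

```python
import numpy as np

H = np.array([[0.,  1., 0.],
              [0., -2., 0.],
              [0.,  1., 0.]])    # infinitesimal stochastic: columns sum to 0
O = np.diag([0., 1., 2.])        # position observable on X = {0, 1, 2}

# d/dt of the expected value of O is the sum of the entries of [O,H] psi,
# so it vanishes for all psi exactly when every column of [O,H] sums to 0.
C = O @ H - H @ O
print(np.allclose(C, 0))               # False: [O, H] != 0
print(np.allclose(C.sum(axis=0), 0))   # True:  <O> is conserved anyway

C2 = O @ O @ H - H @ O @ O
print(np.allclose(C2.sum(axis=0), 0))  # False: <O^2> is NOT conserved
```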

Okay! Starting next time we’ll change gears and look at some more examples of stochastic Petri nets and Markov processes, including some from chemistry. After some more of that, I’ll move on to networks of other sorts. There’s a really big picture here, and I’m afraid I’ve been getting caught up in the details of a tiny corner.

category: blog