This page is a blog article in progress, written by John Baez. To see discussions of it as it was being written, go to the Azimuth Forum. For the final polished version, visit the Azimuth Blog.
We’ve been comparing two theories: stochastic mechanics and quantum mechanics. Last time we saw that any graph gives us an example of both theories! It’s a bit peculiar, but today we’ll explore the intersection of these theories a little further, and see that it has another interpretation. It’s also the theory of electrical circuits made of resistors!
That’s nice, because I’m supposed to be talking about ‘network theory’, and electrical circuits are perhaps the most practical networks of all:
I plan to talk a lot about electrical circuits. I’m not quite ready to dive in, but I can’t resist dipping my toe in the water today. Why don’t you join me? It’s not too cold!
Last time we saw that any graph gives us an operator called the ‘graph Laplacian’ that’s both infinitesimal stochastic and self-adjoint. That means we get both a stochastic time evolution, described by the master equation:

$$\frac{d}{d t} \psi(t) = H \psi(t)$$

and a quantum time evolution, described by Schrödinger’s equation:

$$\frac{d}{d t} \psi(t) = -i H \psi(t)$$
That’s sort of neat, so it’s natural to wonder what are all the operators that are both infinitesimal stochastic and self-adjoint. They’re called ‘Dirichlet operators’, and at least in the finite-dimensional case we’re considering, they’re easy to completely understand. Even better, it turns out they describe electrical circuits made of resistors!
Today let’s take a lowbrow attitude and think of a linear operator $H : \mathbb{C}^n \to \mathbb{C}^n$ as an $n \times n$ matrix with entries $H_{i j}$. Then:
$H$ is self-adjoint if it equals the conjugate of its transpose: $H_{i j} = \overline{H}_{j i}$
$H$ is infinitesimal stochastic if its columns sum to zero and its off-diagonal entries are nonnegative:

$$\sum_i H_{i j} = 0$$

$$i \ne j \implies H_{i j} \ge 0$$
An operator that’s both self-adjoint and infinitesimal stochastic is called a Dirichlet operator.
What are Dirichlet operators like? Suppose $H$ is a Dirichlet operator. Then its off-diagonal entries are $\ge 0$, and since

$$\sum_i H_{i j} = 0$$

its diagonal entries obey

$$H_{j j} = - \sum_{i \ne j} H_{i j} \le 0$$
So all the entries of the matrix $H$ are real. Since $H$ is self-adjoint, this in turn implies it’s symmetric:

$$H_{i j} = H_{j i}$$
So, we can build any Dirichlet operator $H$ as follows:
Choose the entries above the diagonal, $H_{i j}$ with $i \lt j$, to be arbitrary nonnegative real numbers.
The entries below the diagonal, $H_{i j}$ with $i \gt j$, are then forced on us by the requirement that $H$ be symmetric: $H_{i j} = H_{j i}$.
The diagonal entries are then forced on us by the requirement that the columns sum to zero: $H_{i i} = - \sum_{ i \ne j} H_{i j}$.
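Here’s a minimal sketch of this three-step recipe in NumPy; the matrix size and the random entries are arbitrary choices, just for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4

# Step 1: pick arbitrary nonnegative entries above the diagonal.
H = np.zeros((n, n))
upper = np.triu_indices(n, k=1)
H[upper] = rng.uniform(0.0, 2.0, size=len(upper[0]))

# Step 2: symmetry forces the entries below the diagonal.
H = H + H.T

# Step 3: the diagonal is forced by "columns sum to zero".
# (The diagonal is still zero here, so each column sum is the
# sum of that column's off-diagonal entries.)
np.fill_diagonal(H, -H.sum(axis=0))

# Check both defining properties of a Dirichlet operator:
assert np.allclose(H, H.T)              # self-adjoint (real symmetric)
assert np.allclose(H.sum(axis=0), 0.0)  # infinitesimal stochastic
```

Note that steps 2 and 3 involve no further choices: the upper-triangular entries completely determine the Dirichlet operator.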
Note that because the entries are real, we can think of a Dirichlet operator as a linear operator $H : \mathbb{R}^n \to \mathbb{R}^n$. We’ll do that for the rest of today.
Now for the fun part. We can easily draw any Dirichlet operator! To do this, we draw $n$ dots, connect each pair of distinct dots with an edge, and label the edge connecting the $i$th dot to the $j$th with any number $H_{i j} \ge 0$.
This contains all the information we need to build our Dirichlet operator. To make the picture prettier, we can leave out the edges labelled by 0:
Like last time, the graphs I’m talking about are simple: undirected, with no edges from a vertex to itself, and at most one edge from one vertex to another. So:
Theorem. Any finite simple graph with edges labelled by positive numbers gives a Dirichlet operator, and conversely.
We already talked about a special case last time: if we label all the edges by the number 1, our operator $H$ is called the graph Laplacian. So, now we’re generalizing that idea by letting the edges have more interesting labels.
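To make this special case concrete, here’s a NumPy sketch that builds the graph Laplacian of a small made-up graph (a path with three vertices, every edge labelled 1), using the sign convention of this post: off-diagonal entries are the edge labels, and the diagonal makes each column sum to zero.

```python
import numpy as np

# Adjacency matrix of a hypothetical simple graph: the path 1 — 2 — 3,
# with every edge labelled by the number 1.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

# Graph Laplacian in this post's sign convention: subtract each
# vertex's degree on the diagonal so that columns sum to zero.
H = A - np.diag(A.sum(axis=0))

# H is symmetric with nonnegative off-diagonal entries and
# columns summing to zero: a Dirichlet operator.
assert np.allclose(H, H.T)
assert np.allclose(H.sum(axis=0), 0.0)
```

Replacing the 1’s in `A` by other nonnegative numbers gives the more general Dirichlet operators discussed above.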
What’s the meaning of this trick? Well, we can think of our graph as an electrical circuit where the edges are wires. What do the numbers labelling these wires mean? One obvious possibility is to put a resistor on each wire, and let that number be its resistance. But that doesn’t make sense, since we’re leaving out wires labelled by 0. If we leave out a wire, that’s not like having a wire of zero resistance: it’s like having a wire of infinite resistance! No current can go through when there’s no wire. So the number labelling an edge should be the conductance of the resistor on that wire. Conductance is the reciprocal of resistance.
So, our Dirichlet operator above gives a circuit like this:
Here Ω is the symbol for an 'ohm', a unit of resistance… but the upside-down version, namely ℧, is the symbol for a 'mho', a unit of conductance that’s the reciprocal of an ohm.
Let’s see if this cute idea leads anywhere. Think of a Dirichlet operator $H : \mathbb{R}^n \to \mathbb{R}^n$ as a circuit made of resistors. What could a vector $\psi \in \mathbb{R}^n$ mean? It assigns a real number to each vertex of our graph. The only sensible option is for this number to be the electric potential at that point in our circuit. So let’s try that.
Now, what’s

$$\langle \psi , H \psi \rangle ?$$

In quantum mechanics this would be a very sensible thing to look at: it would give us the expected value of the Hamiltonian $H$ in the state $\psi$. But what does it mean in the land of electrical circuits?
Up to a constant fudge factor, it turns out to be the power consumed by the electrical circuit!
Let’s see why. First, remember that when a current flows along a wire, power gets consumed. In other words, electrostatic potential energy gets turned into heat. The power consumed is

$$P = V I$$
where $V$ is the voltage across the wire and $I$ is the current flowing along the wire. If we assume our wire has resistance $R$, we also have Ohm’s law:

$$V = I R$$
so

$$P = V I = \frac{V^2}{R}$$
If we write this using the conductance $C = 1/R$ instead of the resistance $R$, we get

$$P = C V^2$$
But our electrical circuit has lots of wires, so the power it consumes will be a sum of terms like this. We’re assuming $H_{i j}$ is the conductance of the wire from the $i$th vertex to the $j$th, or zero if there’s no wire connecting them. And by definition, the voltage across this wire is the difference in electrostatic potentials at the two ends: $\psi_i - \psi_j$. So, the total power consumed is

$$P = \sum_{i \ne j} H_{i j} (\psi_i - \psi_j)^2$$
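Here is this power computation in NumPy, for a hypothetical circuit with two wires; the conductances and potentials are made-up values, and the sum runs over ordered pairs $i \ne j$ as in the formula above:

```python
import numpy as np

# Conductances H_ij for a hypothetical 3-vertex circuit (in mhos):
# a wire of conductance 1.0 between vertices 1 and 2,
# and a wire of conductance 2.0 between vertices 2 and 3.
H_offdiag = np.array([[0.0, 1.0, 0.0],
                      [1.0, 0.0, 2.0],
                      [0.0, 2.0, 0.0]])

# Electric potential at each vertex (made-up values):
psi = np.array([1.0, 0.5, 0.0])

# P = sum over i != j of H_ij * (psi_i - psi_j)^2
diffs = psi[:, None] - psi[None, :]
P = np.sum(H_offdiag * diffs**2)
print(P)  # → 1.5
```

Each wire sees a voltage of 0.5 across it, so the two ordered-pair terms for each wire contribute $2 \times 1.0 \times 0.25 + 2 \times 2.0 \times 0.25 = 1.5$.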
This is nice, but what does it have to do with $\langle \psi , H \psi \rangle$?
The answer is here:
Theorem. If $H : \mathbb{R}^n \to \mathbb{R}^n$ is any Dirichlet operator, and $\psi \in \mathbb{R}^n$ is any vector, then

$$\langle \psi , H \psi \rangle = -\frac{1}{2} \sum_{i \ne j} H_{i j} (\psi_i - \psi_j)^2$$
Proof. Let’s start with the formula for power:

$$P = \sum_{i \ne j} H_{i j} (\psi_i - \psi_j)^2$$
Note that the sum includes the condition $i \ne j$, since we only have wires going between distinct vertices. But the summand is zero if $i = j$, so

$$P = \sum_{i, j} H_{i j} (\psi_i - \psi_j)^2$$
Expanding the square, we get

$$P = \sum_{i, j} H_{i j} \psi_i^2 \; - \; 2 \sum_{i, j} H_{i j} \psi_i \psi_j \; + \; \sum_{i, j} H_{i j} \psi_j^2$$
The middle term looks promisingly similar to $\langle \psi, H \psi \rangle$, but what about the other two terms? Because $H_{i j} = H_{j i}$, they’re equal:

$$\sum_{i, j} H_{i j} \psi_i^2 = \sum_{i, j} H_{i j} \psi_j^2$$
And in fact they’re zero! Since $H$ is infinitesimal stochastic, we have

$$\sum_i H_{i j} = 0$$
so

$$\sum_i H_{i j} \psi_j^2 = 0$$
and it’s still zero when we sum over $j$. We thus have

$$P = - 2 \sum_{i, j} H_{i j} \psi_i \psi_j$$
But since $\psi_i$ is real, this is $-2$ times

$$\langle \psi, H \psi \rangle = \sum_{i, j} \overline{\psi_i} H_{i j} \psi_j$$
So, we’re done. █
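It’s easy to check the theorem numerically. Here’s a sketch in NumPy, with a randomly generated Dirichlet operator and a random potential vector:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Build a random Dirichlet operator: nonnegative entries above the
# diagonal, symmetry below, diagonal forced by zero column sums.
H = np.zeros((n, n))
iu = np.triu_indices(n, k=1)
H[iu] = rng.uniform(0.0, 1.0, size=len(iu[0]))
H = H + H.T
np.fill_diagonal(H, -H.sum(axis=0))

# A random real potential vector:
psi = rng.standard_normal(n)

# Power from the circuit formula. The diagonal terms contribute
# nothing, since psi_i - psi_i = 0, so we can sum over all i, j.
diffs = psi[:, None] - psi[None, :]
P = np.sum(H * diffs**2)

# The theorem: <psi, H psi> = -P/2.
assert np.isclose(psi @ H @ psi, -0.5 * P)

# Since P is a sum of nonnegative terms, the quadratic form
# is automatically nonpositive.
assert psi @ H @ psi <= 1e-12
```
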
An instant consequence of this theorem is that a Dirichlet operator has

$$\langle \psi , H \psi \rangle \le 0$$

for all $\psi$. Actually most people use the opposite sign convention in defining infinitesimal stochastic operators. This makes $H_{i j} \le 0$ for $i \ne j$, which is mildly annoying, but it gives

$$\langle \psi , H \psi \rangle \ge 0$$

which is nice. When $H$ is a Dirichlet operator, defined with this opposite sign convention, $\langle \psi , H \psi \rangle$ is called a Dirichlet form.
Maybe it’s a good time to step back and see where we are.
So far we’ve been exploring the analogy between stochastic mechanics and quantum mechanics. Where do networks come in? Well, they’ve actually come in twice so far:
1) First we saw that Petri nets can be used to describe stochastic or quantum processes where things of different kinds randomly react and turn into other things. A Petri net is a kind of network like this:
The different kinds of things are the yellow circles; we called them states, because sometimes we think of them as different states of a single kind of thing. The reactions where things turn into other things are the blue squares: we called them transitions. We label the transitions with numbers to specify the rates at which they occur. In stochastic mechanics these are nonnegative real numbers; in quantum mechanics they’re complex.
2) Then we looked at stochastic or quantum processes where in each transition a single thing turns into a single thing. We can draw these as Petri nets where each transition has just one state as input and one state as output. But we can also draw them as directed graphs with edges labelled by numbers:
Now the dark blue boxes are states and the edges are transitions!
Today we looked at a special case of the second kind of network: the Dirichlet operators. For these the ‘forward’ transition rate $H_{i j}$ equals the ‘reverse’ rate $H_{j i}$, so our graph can be undirected: no arrows on the edges. And for these the rates $H_{i i}$ are determined by the rest, so we can omit the edges from vertices to themselves:
And the result can be seen as an electrical circuit made of resistors! So we’re building up a little dictionary:
Stochastic mechanics: $\psi_i$ is a probability and $H_{i j}$ is a transition rate (probability per time).
Quantum mechanics: $\psi_i$ is an amplitude and $H_{i j}$ is a transition rate (amplitude per time).
Circuits made of resistors: $\psi_i$ is a voltage and $H_{i j}$ is a conductance.
This dictionary may seem rather odd. But that’s good: if it weren’t odd, it wouldn’t give us any new ideas.
While odd, it’s actually well-known. We’ve been looking at the discrete version, where we have a finite set of states. But in the continuum, the classic example of a Dirichlet operator is the Laplacian $H = \nabla^2$. And then we have:
• The heat equation

$$\frac{d}{d t} \psi = \nabla^2 \psi$$

is fundamental to stochastic mechanics.

• Schrödinger’s equation

$$\frac{d}{d t} \psi = -i \nabla^2 \psi$$

is fundamental to quantum mechanics.

• Poisson’s equation

$$\nabla^2 \psi = -\rho$$

is fundamental to electrostatics.
Briefly speaking, electrostatics is the study of how the electric potential $\psi$ depends on the charge density $\rho$. The theory of electrical circuits made of resistors can be seen as a special case, at least when the current isn’t changing with time.
I’ll say a lot more about this… but not today! If you want to learn more, this is a great place to start:
This free online book explains, in a really fun informal way, how random walks on graphs are related to electrical circuits made of resistors. To dig deeper into the continuum case, try: