The Azimuth Project
Supersymmetry and the Fokker-Planck equation

This page is a blog article in progress, written by Blake Stacey. Unlike my other scratch notes, it was not superseded by my thesis.


We provide a pedagogical introduction to Fokker–Planck diffusion equations with spatiotemporally varying drift and diffusion coefficients. The treatment of Fokker–Planck equations with changes of variable is reviewed, followed by the transformation of diffusion equations into Schrödinger-like form, the application of supersymmetric quantum mechanics (SUSY QM), and the use of Floquet theory. The effective diffusion coefficient is defined in terms of first-passage times, and we argue that exchanging a potential for its SUSY partner should leave this coefficient unchanged, even in the case of periodic forcing.

Introduction: Biological Applications

We investigate solutions of the Fokker–Planck diffusion equation with spatiotemporally varying drift and diffusion coefficients,

$$\partial_t p(x,t) = -\partial_x \left(F(x,t)\, p(x,t)\right) + \partial_x^2 \left(D(x,t)\, p(x,t)\right).$$

This equation is of biological significance in that it can describe, for example, the time evolution of a gene pool. Given a diploid population of $N$ individuals, we let $x$ stand for the frequency of one allelic form of a gene; the diffusion coefficient captures the effect of random mating,

$$D(x,t) = D(x) = \frac{x (1 - x)}{4N},$$

while the force field $F(x,t)$ incorporates both mutation and selection pressure:

$$F(x,t) = \mu(1-2x) + f(t)\, x(1-x).$$

Here, selection is implemented via the difference in Malthusian fitnesses $f(t) = f_2(t) - f_1(t)$, which we consider to vary on a timescale longer than a generation but shorter than the experiment.
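As a quick plausibility check on this description, one can integrate the corresponding Itô Langevin equation, $dx = F\,dt + \sqrt{2D(x)}\,dW$, by the Euler–Maruyama method. A minimal sketch; all numerical values here are hypothetical choices for illustration, not taken from the text:

```python
import numpy as np

# Hypothetical parameters, chosen only for illustration.
N, mu = 100, 0.01            # population size and mutation rate; 4*N*mu = 4
dt, T, n_paths = 0.05, 300.0, 5000
rng = np.random.default_rng(0)

def drift(x, f=0.0):
    # F(x,t) = mu(1 - 2x) + f(t) x(1 - x); selection switched off here (f = 0)
    return mu * (1.0 - 2.0 * x) + f * x * (1.0 - x)

def diffusion(x):
    # D(x) = x(1 - x) / (4N), the random-mating term
    return x * (1.0 - x) / (4.0 * N)

x = np.full(n_paths, 0.1)    # every replicate starts at allele frequency 0.1
for _ in range(int(T / dt)):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    x += drift(x) * dt + np.sqrt(2.0 * diffusion(x)) * dW
    np.clip(x, 0.0, 1.0, out=x)   # keep rounding overshoots inside [0, 1]

mean_freq = x.mean()
```

With $4N\mu = 4$ the stationary density is proportional to $(x(1-x))^{4N\mu - 1}$, a symmetric Beta distribution, so the ensemble mean relaxes toward $1/2$ even though every run begins at $0.1$.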

The Fokker–Planck form has also been used to describe the stochastic dynamics of neural activity. Brunel uses the form

$$\tau \partial_t p(V,t) = -\partial_V\left[(\mu(t) - V)\, p(V,t)\right] + \frac{\sigma^2(t)}{2} \partial_V^2 p(V,t),$$

in which $V$ stands for the membrane depolarization of a randomly chosen neuron, $\sigma^2(t)$ is the fluctuation in excitatory and inhibitory inputs, and $\mu(t)$ is the average input signal.
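The same game works for Brunel's equation, whose equivalent Langevin form is $dV = ((\mu - V)/\tau)\,dt + (\sigma/\sqrt{\tau})\,dW$; for constant $\mu$ and $\sigma$ the stationary density is Gaussian with mean $\mu$ and variance $\sigma^2/2$. A sketch with hypothetical constants:

```python
import numpy as np

# Hypothetical constants for illustration.
tau, mu_in, sigma = 0.5, 1.2, 1.0   # membrane time constant, mean drive, fluctuation
dt, T, n = 0.005, 5.0, 20000        # T = 10 tau, plenty of time to reach stationarity
rng = np.random.default_rng(42)

# Langevin equation equivalent to Brunel's Fokker-Planck form:
#   dV = ((mu - V)/tau) dt + (sigma/sqrt(tau)) dW
V = np.zeros(n)
for _ in range(int(T / dt)):
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    V += (mu_in - V) / tau * dt + sigma / np.sqrt(tau) * dW

# The stationary density is Gaussian with mean mu and variance sigma^2 / 2.
V_mean, V_var = V.mean(), V.var()
```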

Changes of Variable

The Fokker–Planck equation can be put into a more convenient form through changes of variable.

Following section 5.1 of Risken, we introduce the new variable

$$y = \int_0^x \sqrt{\frac{D}{D(\xi)}}\, d\xi.$$

For the particular case of population genetics with random mating,

$$y = \sqrt{4N D}\int_0^x\frac{d\xi}{\sqrt{\xi(1-\xi)}}.$$

Using the standard formula (valid for $a < 0$)

$$\int\frac{dx}{\sqrt{a x^2 + b x + c}} = -\frac{1}{\sqrt{-a}} \arcsin \left(\frac{2 a x + b}{\sqrt{b^2 - 4 a c}}\right),$$

this can be reduced to

$$\frac{y}{\sqrt{4N D}} = \frac{\pi}{2} - \arcsin(1 - 2x) \equiv \theta.$$
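The endpoint singularity of the integrand makes a direct quadrature check awkward, but differentiating the closed form is easy: $d\theta/dx$ should reproduce the integrand $1/\sqrt{x(1-x)}$. A quick finite-difference sketch:

```python
import numpy as np

theta = lambda x: np.pi / 2 - np.arcsin(1.0 - 2.0 * x)

x = np.linspace(0.1, 0.9, 17)
h = 1e-6
numeric = (theta(x + h) - theta(x - h)) / (2.0 * h)   # central difference d(theta)/dx
exact = 1.0 / np.sqrt(x * (1.0 - x))                  # integrand of the y(x) integral
max_rel_err = np.max(np.abs(numeric / exact - 1.0))
```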

The force term is also transformed:

$$G(y,t) = \sqrt{\frac{D}{D(x)}} \left[F(x,t) - \frac{1}{2}\partial_x D(x)\right].$$

Substituting in our expressions for $D(x)$ and for the force field $F(x,t)$ gives

$$G(y,t) = \sqrt{\frac{4N D}{x(1-x)}} \left[(1-2x)\left(\mu - \frac{1}{8N}\right) + f(t)\, x(1-x)\right].$$
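The algebra inside the brackets can be confirmed symbolically. A small sympy sketch verifying that $F(x,t) - \frac{1}{2}\partial_x D(x)$ reduces to the quoted combination:

```python
import sympy as sp

x, N, mu, f = sp.symbols('x N mu f', positive=True)
D = x * (1 - x) / (4 * N)                      # random-mating diffusion coefficient
F = mu * (1 - 2 * x) + f * x * (1 - x)         # mutation plus selection drift
bracket = F - sp.diff(D, x) / 2
claimed = (1 - 2 * x) * (mu - 1 / (8 * N)) + f * x * (1 - x)
residual = sp.simplify(bracket - claimed)      # should vanish identically
```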

We eliminate $x$ in favor of $y$ or, more conveniently, the normalized variable $\theta$. Noting that

$$1 - 2x = \cos\theta$$

and

$$x(1-x) = \frac{\sin^2\theta}{4},$$

we can after some algebra show that

$$G(\theta,t) = 4\sqrt{N D} \left[\left(\mu - \frac{1}{8N}\right)\cot\theta + \frac{f(t)}{4} \sin\theta\right].$$

We can avail ourselves of the mathematical tools developed in quantum mechanics if we manipulate the Fokker–Planck equation into the form of a Schrödinger equation. This is a special case of a general trick employed in the study of differential equations, by which one studies non-self-adjoint differential operators by transmogrifying them into self-adjoint ones. See chapter 18 of Gbur’s textbook. To see how this is done, we start with a Fokker–Planck equation written operatorially as

$$\partial_t p(x,t) = \mathcal{L}_0\, p(x,t),$$

where $\mathcal{L}_0$ has a drift term linear in the derivative $\partial_x$ and a diffusion term quadratic in $\partial_x$:

$$\mathcal{L}_0 = D\partial_x^2 - \partial_x h(x).$$

The operator $\mathcal{L}_0$ is not Hermitian, since taking the conjugate changes the sign of the derivative $\partial_x$. (Recall from basic quantum mechanics that the momentum operator in the position basis is $-i\partial_x$, and that momentum, being an observable quantity, needs a Hermitian operator.) To make our Fokker–Planck time evolution look more like a Schrödinger equation, we shall try to invent a Hermitian analogue of $\mathcal{L}_0$. The trick we employ is to define

$$\phi_0(x) \equiv \exp\left(\frac{1}{2D} \int_0^x h(y)\, dy\right),$$

and to rewrite the probability distribution $p(x,t)$ as

$$p(x,t) = \phi_0(x)\, \psi(x,t).$$

As we shall soon see, the function $\psi(x,t)$ plays the role of the wavefunction in the Schrödinger-like version of our time-evolution equation. Because the Fokker–Planck operator involves first and second derivatives with respect to $x$, we note that

$$\partial_x \phi_0(x) = \frac{h}{2D}\phi_0(x)$$

and thus

$$\partial_x^2 \phi_0(x) = \frac{h'}{2D}\phi_0(x) + \frac{h^2}{4D^2} \phi_0(x).$$

The left-hand side of our Fokker–Planck equation is, using our new “wavefunction”,

$$\partial_t p(x,t) = \phi_0(x)\, \partial_t \psi(x,t).$$

If we move $\phi_0(x)$ to the other side of the equation, we get that

$$\partial_t \psi(x,t) = \phi_0^{-1}\mathcal{L}_0\phi_0\, \psi(x,t).$$

Working out $\mathcal{L}_0\phi_0\psi$ yields several terms, all of which contain a factor $\phi_0$, since every derivative of $\phi_0$ yields a $\phi_0$ again. Therefore, when we divide by $\phi_0$ we find

$$\phi_0^{-1}\mathcal{L}_0\phi_0\psi = D\partial_x^2\psi + \left(-\frac{h'}{2} - \frac{h^2}{4D}\right)\psi.$$
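This similarity transformation can be verified with a computer algebra system. A sketch in sympy, treating $h$ as an arbitrary function and $D$ as a constant (an antiderivative stands in for the definite integral in $\phi_0$):

```python
import sympy as sp

x, D = sp.symbols('x D', positive=True)
h = sp.Function('h')(x)
psi = sp.Function('psi')(x)

# phi_0(x) = exp((1/2D) * integral of h)
phi0 = sp.exp(sp.Integral(h, x) / (2 * D))
L0 = lambda p: D * sp.diff(p, x, 2) - sp.diff(h * p, x)   # Fokker-Planck operator

transformed = sp.expand(L0(phi0 * psi) / phi0)            # phi_0^{-1} L_0 phi_0 psi
target = sp.expand(D * sp.diff(psi, x, 2)
                   + (-sp.diff(h, x) / 2 - h**2 / (4 * D)) * psi)
residual = sp.simplify(transformed - target)              # should vanish identically
```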

Both operators acting on $\psi$ are Hermitian, the first because it is quadratic in the derivative and the second because it contains only real functions of $x$.

Consequently, we define the new operator

$$\mathcal{H} \equiv \phi_0^{-1}\mathcal{L}_0\phi_0,$$

and we rewrite our equation in the form

$$\partial_t \psi(x,t) = \mathcal{H}\psi(x,t).$$

This is the Schrödinger-like formulation of the Fokker–Planck equation.

It is interesting to note what happens when the diffusion coefficient is not constant, i.e., when $D(x)$ is a non-constant function of $x$. If we define

$$\phi_0^*(x) \equiv \exp\left(\int_0^x\frac{h(y)}{2D(y)}\, dy\right),$$

then a series of manipulations following the steps above leads to the result

$$\mathcal{H}^* = D\partial_x^2 + 2D'\partial_x + \left(-\frac{h'}{2} - \frac{h^2}{4D}\right) + \frac{D'h}{2D},$$

which clearly reduces to the $\mathcal{H}$ given above when $D(x) = D$. Note that $\mathcal{H}^*$ contains a term linear in $\partial_x$, and thus is not Hermitian.


Cooper, Khare and Sukhatme (1995) give a good general introduction to supersymmetry methods in quantum mechanics. Basically, it’s the solution to the simple harmonic oscillator using creation and annihilation operators, but all grown up. We start with a Hamiltonian $\mathcal{H}_1$, which we say we can factor into an operator and its adjoint:

$$\mathcal{H}_1 = A^\dagger A.$$

By exchanging the order of these factors, we obtain a new Hamiltonian, which we call the partner of the first:

$$\mathcal{H}_2 = A A^\dagger.$$

A typical Hamiltonian in one-dimensional quantum mechanics has a momentum term which in the position basis becomes a second derivative and a position-dependent term indicating the potential energy of the system. Scaling to discard unnecessary units, we can choose the operator $A$ to take the form

$$A = \partial_x + W(x),$$

where $W(x)$ is known as the superpotential. As indicated above, taking the adjoint changes the sign on the spatial derivative:

$$A^\dagger = -\partial_x + W(x).$$

In terms of the superpotential,

$$\mathcal{H}_1 = -\partial_x^2 - W'(x) + W^2(x).$$

Exchanging the order of the operators yields

$$\mathcal{H}_2 = -\partial_x^2 + W'(x) + W^2(x).$$

The Hamiltonians $\mathcal{H}_1$ and $\mathcal{H}_2$ are isospectral (up to a possible zero-energy ground state of $\mathcal{H}_1$), as one can easily demonstrate. SUSY plus the extra condition known as shape invariance allows one to solve for eigenfunctions and eigenvalues using operator methods, like one does for the harmonic oscillator.
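The isospectrality can be seen concretely by discretizing the two partner Hamiltonians. A finite-difference sketch for the illustrative choice $W(x) = x$, for which $\mathcal{H}_1$ and $\mathcal{H}_2$ are shifted harmonic oscillators:

```python
import numpy as np

# Discretize H1 = -d^2/dx^2 + W^2 - W' and H2 = -d^2/dx^2 + W^2 + W'
# for the superpotential W(x) = x, so W' = 1.
n, L = 600, 8.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
off = np.full(n - 1, 1.0)
lap = (np.diag(off, -1) - 2.0 * np.eye(n) + np.diag(off, 1)) / dx**2

W = x
H1 = -lap + np.diag(W**2 - 1.0)
H2 = -lap + np.diag(W**2 + 1.0)

e1 = np.linalg.eigvalsh(H1)[:4]   # expect approximately 0, 2, 4, 6
e2 = np.linalg.eigvalsh(H2)[:3]   # expect approximately 2, 4, 6
```

The two spectra coincide except for the zero-energy ground state of $\mathcal{H}_1$, the hallmark of unbroken SUSY.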

The Schrödinger form of the Fokker–Planck equation exhibits a supersymmetry of this form, between the Hamiltonian with force coefficient $F(x)$ and its partner with force coefficient $-F(x)$. We can factor the Fokker–Planck Hamiltonian given earlier using the two operators

$$a = \sqrt{D}\,\partial_x - \frac{1}{2\sqrt{D}}\, F,$$

and its adjoint

$$a^\dagger = -\sqrt{D}\,\partial_x - \frac{1}{2\sqrt{D}}\, F.$$

With this choice of constant prefactors,

$$\mathcal{H} = -a^\dagger a.$$

The constants here were chosen for compatibility with Jung and Hänggi.

If we use the same operators $a$ and $a^\dagger$ but with $D$ a function of $x$, then the non-Hermitian operator $\mathcal{H}^*$ defined above can be written

$$\mathcal{H}^* = -a^\dagger a + \frac{3D'}{4}\partial_x + \frac{D' F}{4D}.$$

The derivative can be expressed in terms of the SUSY operators as

$$\mathcal{H}^* = -a^\dagger a - \left(\frac{3D'}{4\sqrt{D}}\right)a^\dagger + \frac{5}{8}\, \frac{D'F}{D}.$$

Floquet Theory and Time Dependence

We wish to understand what happens when the influences on a diffusion process vary periodically over time. A particle trapped in a potential well may be subjected to an extra oscillating field; a population of microbes in a dish might be dividing once every half-hour in a laboratory which is dark for twelve hours out of every twenty-four. Differential equations with periodic driving forces are treated with Floquet theory. One way to apply Floquet theory to our diffusion equations is to rewrite a problem involving diffusion along a one-dimensional line as one of diffusion in a 2D space, with a new coordinate $\theta$ taking over the role of the time $t$.

Following Jung and Hänggi, we write

$$\partial_t p = -\partial_x [h(x) + A f(\Omega t)]\, p + D\partial_x^2 p,$$

and transform it into the 2D process

$$\partial_t W(x,\theta;t) = (\mathcal{L}_0 - A f(\theta) \partial_x - \Omega \partial_\theta)\, W(x,\theta;t),$$

in which the operator $\mathcal{L}_0$ is

$$\mathcal{L}_0 = -\partial_x h + D\partial_x^2,$$

capturing the tao of the Fokker–Planck. If we set the right-hand side of the Fokker–Planck equation to zero, we find a differential equation for the stationary probability density $W_{st}(x,\theta)$:

$$(\mathcal{L}_0 - A f(\theta)\partial_x - \Omega\partial_\theta)\, W_{st}(x,\theta) = 0.$$

The probability distribution in the 1D view of the problem, $p(x,t)$, does not tend to a steady-state solution, but its asymptotic behavior is just given by $W_{st}(x,\theta)$:

$$p_{as}(x,t) = 2\pi\, W_{st}(x,\theta),$$


where

$$t = \frac{\theta}{\Omega}.$$

The factor of $2\pi$ arises because we have compactified the $\theta$ coordinate in order to represent the temporally periodic influence $f(t)$.
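A concrete illustration, for the hypothetical linear-drift choice $h(x) = -x$ (an Ornstein–Uhlenbeck process) with drive $A\cos\Omega t$: the mean of $p(x,t)$ obeys $\dot{m} = -m + A\cos\Omega t$, so at late times it settles onto the periodic cycle $A(\cos\Omega t + \Omega\sin\Omega t)/(1+\Omega^2)$ rather than a constant, which is exactly the asymptotic periodicity described above. A sketch:

```python
import numpy as np

# Mean of p(x,t) for h(x) = -x with sinusoidal drive: m' = -m + A*cos(Omega*t).
A, Omega = 1.0, 1.0
dt = 1e-3

def evolve(m, t0, t1):
    # forward-Euler integration of the mean equation from t0 to t1
    t = t0
    while t < t1 - 1e-12:
        m += (-m + A * np.cos(Omega * t)) * dt
        t += dt
    return m

period = 2 * np.pi / Omega
m = evolve(0.0, 0.0, 40.0)                 # transient has decayed like e^{-40}
m_next = evolve(m, 40.0, 40.0 + period)    # one more full drive period

# Asymptotic cycle: m_as(t) = A (cos(Omega t) + Omega sin(Omega t)) / (1 + Omega^2)
m_as = A * (np.cos(Omega * 40.0) + Omega * np.sin(Omega * 40.0)) / (1 + Omega**2)
```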

If we soup up the 1D process just a bit,

$$\partial_t p = -\partial_x [h(x) + g(x) f(\Omega t)]\, p + D\partial_x^2 p,$$

then the derivative with respect to $x$ gives us one more term:

$$\partial_t W = [\mathcal{L}_0 - g' f(\theta) - g f(\theta)\partial_x - \Omega\partial_\theta]\, W,$$

where we might as well define the expression in brackets to be a new operator $\mathcal{L}$.

Echoing the formalism developed in the section above, we define the operator

$$a = \sqrt{D}\,\partial_x - \frac{1}{2\sqrt{D}}\, h,$$

whose conjugate is

$$a^\dagger = -\sqrt{D}\,\partial_x - \frac{1}{2\sqrt{D}}\, h.$$

The product of these operators acting on a test function $\psi$ is

$$a^\dagger a\, \psi = -D \partial_x^2 \psi + \left(\frac{h'}{2} + \frac{h^2}{4D}\right)\psi.$$

Exchanging the order of $a^\dagger$ and $a$ merely changes the sign on the $h'$ term, allowing us to deduce that

$$[a, a^\dagger] = -h'.$$
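Both of these operator identities can be confirmed symbolically. A quick sympy check, treating $h$ as an arbitrary function and $D$ as a constant:

```python
import sympy as sp

x, D = sp.symbols('x D', positive=True)
h = sp.Function('h')(x)
psi = sp.Function('psi')(x)
sD = sp.sqrt(D)

a  = lambda p:  sD * sp.diff(p, x) - h / (2 * sD) * p   # a
ad = lambda p: -sD * sp.diff(p, x) - h / (2 * sD) * p   # a dagger

# a^dagger a psi = -D psi'' + (h'/2 + h^2/(4D)) psi
res_product = sp.simplify(ad(a(psi))
                          - (-D * sp.diff(psi, x, 2)
                             + (sp.diff(h, x) / 2 + h**2 / (4 * D)) * psi))

# [a, a^dagger] psi = -h' psi
res_comm = sp.simplify(a(ad(psi)) - ad(a(psi)) + sp.diff(h, x) * psi)
```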

If we attempt to construct a Hermitian version of $\mathcal{L}$,

$$\mathcal{H} \equiv \phi_0^{-1}\mathcal{L}\phi_0,$$

with

$$\phi_0(x) = \exp\left(\frac{1}{2D} \int_0^x h(y)\, dy \right),$$

we find that

$$\mathcal{H} = -a^\dagger a - f g\,\partial_x - \Omega\partial_\theta - f g' - \frac{f g h}{2D}.$$

The terms involving $f$ and $g$ can be combined by noting that $a^\dagger$ contains a derivative with respect to $x$:

$$\mathcal{H} = -a^\dagger a + \frac{f a^\dagger}{\sqrt{D}}\, g - \Omega\partial_\theta.$$

If the force $h(x)$ is exchanged with $\tilde{h}(x) \equiv -h(x)$, and these steps are repeated, one deduces that

$$\tilde{\mathcal{H}} = -a a^\dagger - \frac{f a}{\sqrt{D}}\, g - \Omega\partial_\theta.$$

In SUSY QM, the operators $a$ and $a^\dagger$ map eigenstates of one Hamiltonian into eigenstates of its partner. We consider, therefore, products of these operators with our Hamiltonians. By straightforward algebra,

$$a\mathcal{H}^\dagger = -a a^\dagger a + \frac{a f}{\sqrt{D}}\, a g + a\Omega\partial_\theta,$$

and

$$\tilde{\mathcal{H}} a = -a a^\dagger a + \frac{f a}{\sqrt{D}}\, g a - a\Omega\partial_\theta.$$

Using $\mathfrak{T}$ to denote the operation of $\theta$ reversal,

$$a\mathfrak{T}\mathcal{H}^\dagger = -a a^\dagger a + \frac{(\mathfrak{T}f)}{\sqrt{D}}\, a g - a\Omega\partial_\theta.$$

Subtracting the previous two equations gives

$$a(\mathfrak{T}\mathcal{H}^\dagger) - \tilde{\mathcal{H}}a = \frac{a}{\sqrt{D}}\left[(\mathfrak{T}f)\, a g - f g a\right].$$

Because $f(\theta)$ is periodic, we might as well call it even:

$$a(\mathfrak{T}\mathcal{H}^\dagger) - \tilde{\mathcal{H}}a = \frac{a f}{\sqrt{D}}\, [a, g].$$

If $[a, g] = 0$, then we recover the case studied by Jung and Hänggi, in which a SUSY exists between $\mathcal{H}^\dagger$ and $\tilde{\mathcal{H}}$. To illustrate, consider the eigenfunctions of $\mathcal{H}$ and $\mathcal{H}^\dagger$, defined by

$$\mathcal{H}\Sigma(x,\theta) = \lambda\Sigma(x,\theta)$$

and

$$\mathcal{H}^\dagger\Lambda(x,\theta) = \lambda\Lambda(x,\theta).$$

Left-multiplying the previous equation by $a$ and acting with $\mathfrak{T}$ shows that

$$a(\mathfrak{T}\mathcal{H}^\dagger)\Lambda(x,-\theta) = \lambda\, a\Lambda(x,-\theta),$$

and making the substitution $a(\mathfrak{T}\mathcal{H}^\dagger) = \tilde{\mathcal{H}}a$, we can write

$$\tilde{\mathcal{H}}a \Lambda(x,-\theta) = \lambda\, a\Lambda(x,-\theta).$$

The operator $a$ maps eigenstates of $\mathfrak{T}\mathcal{H}^\dagger$ into those of $\tilde{\mathcal{H}}$.

To understand the effect of the function $g(x)$ not being constant, we need to understand the commutator $[a, g]$. Because $[g, h] = 0$, we only need to consider the term in $a$ containing the derivative $\partial_x$. That is,

$$[a, g] = \sqrt{D}\, [\partial_x, g].$$

When we let this operator act on a wavefunction $\psi$, the surviving term is proportional to $g'\psi$. We get that

$$a(\mathfrak{T}\mathcal{H}^\dagger - f g') - \tilde{\mathcal{H}}a = 0.$$

Thus, it appears that we can employ SUSY for the Brunel description of neural dynamics, at least for the special case of constant variance $\sigma^2$, but not for population genetics with periodically varying fitness, at least not easily.

Comparison to Simulation

To understand the long-term behavior of solutions of the Fokker–Planck equation, we employ measures which summarize salient features of the probability distribution $p(x,t)$. For example, we are curious to know how much $p(x,t)$ will have spread out when $t$ has grown rather large. A natural choice is to examine the variance, $\langle x^2(t) \rangle_c = \langle x^2(t) \rangle - \langle x(t)\rangle^2$, scaled by the elapsed time:

$$D_{eff} \equiv \lim_{t \rightarrow\infty} \frac{\langle x^2(t)\rangle - \langle x(t)\rangle^2}{2t}.$$
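This limit is straightforward to estimate by direct simulation. A sketch for overdamped Brownian motion in the hypothetical untilted periodic potential $V(x) = \cos x$, with $D_0 = k_B T = 1$ in scaled units:

```python
import numpy as np

# Overdamped Langevin dynamics dx = -V'(x) dt + sqrt(2 D0) dW for V(x) = cos(x).
D0, dt, T, n = 1.0, 0.01, 300.0, 3000
rng = np.random.default_rng(7)

x = np.zeros(n)
for _ in range(int(T / dt)):
    # drift = -V'(x) = sin(x)
    x += np.sin(x) * dt + np.sqrt(2.0 * D0 * dt) * rng.normal(size=n)

D_eff_est = x.var() / (2.0 * T)
```

For this potential the classic Lifson–Jackson formula predicts $D_{eff} = D_0 / I_0(1)^2 \approx 0.62\, D_0$, so the estimate should land well below the bare diffusion constant.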

This is the effective diffusion coefficient defined by Reimann et al. and used by Sahoo et al. in the case of periodic partner potentials. It can be shown analytically that the effective diffusion coefficients for such partner potentials $\pm V(x)$ are in fact equal. Furthermore, Sahoo et al. find through numerical simulation that the effective diffusion coefficients are also equal in the case of periodic forcing.

Let $t(x_0\rightarrow b)$ denote the time required to reach $b$ from $x_0$. Then

$$T_n(x_0\rightarrow b) \equiv \langle t^n(x_0\rightarrow b) \rangle.$$

For the case of a particle in a tilted periodic potential with

$$V(x) \equiv V_0(x) - x F$$

and

$$V_0(x) = V_0(x + L),$$

the moments T nT_n satisfy the recursion relation

$$T_n(x_0\rightarrow b) = \frac{n}{D_0} \int_{x_0}^b dx\, \exp\left(\frac{V(x)}{k_B T}\right) \int_{-\infty}^x dy\, \exp\left(-\frac{V(y)}{k_B T}\right) T_{n-1}(y\rightarrow b),$$

where $T_0$ is defined to equal unity. The effective diffusion $D_{eff}$ can be derived in terms of the “first passage time dispersion”

$$\Delta T_2(x_0\rightarrow b) \equiv \langle t^2(x_0\rightarrow b) \rangle - \langle t(x_0\rightarrow b)\rangle^2,$$

which is just the second cumulant, of course.

The units of $D_{eff}$ are length squared over time. We have one obvious quantity having units of length, namely the potential periodicity $L$. The dispersion $\Delta T_2$ carries units of time squared, so because we expect $D_{eff}$ to grow with increasing dispersion, we have to divide by a quantity with units of time cubed. This dimensional analysis is our heuristic justification for the result of Reimann et al.,

$$D_{eff} = \frac{L^2}{2} \frac{\Delta T_2(x_0 \rightarrow x_0 + L)}{T_1(x_0 \rightarrow x_0 + L)^3},$$

which they derive using a coarse-graining argument.

In physical units, the Fokker–Planck equation for this process is

$$\partial_t p(x,t) = \frac{1}{\gamma}\partial_x\left[V'(x)\, p(x,t)\right] + \frac{k_B T}{\gamma}\,\partial_x^2 p(x,t).$$

Writing $D_0$ for the bare diffusion constant $k_B T/\gamma$, where $\gamma$ is the friction coefficient, we define

$$I_+(x) \equiv \frac{1}{D_0}\, e^{\frac{V_0(x) - Fx}{k_B T}} \int_{x-L}^x dy\, \exp\left(\frac{-V_0(y) + Fy}{k_B T}\right),$$

and the complementary quantity

$$I_-(x) \equiv \frac{1}{D_0}\, e^{\frac{-V_0(x) + Fx}{k_B T}} \int_x^{x+L} dy\, \exp\left(\frac{V_0(y) - Fy}{k_B T}\right).$$

The effective diffusion is

$$D_{eff} = D_0\, \frac{\int_{x_0}^{x_0 + L} I_\pm(x)\, I_+(x)\, I_-(x)\, dx}{\left[\int_{x_0}^{x_0 + L} I_\pm(x)\, dx\right]^3}.$$

Here, the subscript $\pm$ indicates that the index on that function may be chosen to be either $+$ or $-$, and $x_0$ is an arbitrary reference point of convenience.

The symmetry of the factors in this expression indicates that under the exchange $V(x) \rightarrow -V(x)$, $F \rightarrow -F$, the effective diffusion is unchanged.
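This invariance can be checked numerically by evaluating the $I_\pm$ integrals with a simple trapezoid rule. A sketch, using the hypothetical choices $V_0(x) = \cos x$, $F = 0.3$, $k_B T = D_0 = 1$, $L = 2\pi$:

```python
import numpy as np

def trap(vals, xs):
    # trapezoid rule along the last axis
    return 0.5 * ((vals[..., 1:] + vals[..., :-1])
                  * (xs[..., 1:] - xs[..., :-1])).sum(axis=-1)

def D_eff(V0, F, D0=1.0, kT=1.0, L=2 * np.pi, n=400, m=1200):
    V = lambda z: V0(z) - F * z          # tilted potential V = V0 - xF
    xs = np.linspace(0.0, L, n)

    def inner(lo, hi, sign):
        # integral of exp(sign * V / kT) from lo[i] to hi[i], for each i
        t = np.linspace(0.0, 1.0, m)
        ys = lo[:, None] + (hi - lo)[:, None] * t[None, :]
        return trap(np.exp(sign * V(ys) / kT), ys)

    Ip = np.exp(V(xs) / kT) / D0 * inner(xs - L, xs, -1.0)   # I_+
    Im = np.exp(-V(xs) / kT) / D0 * inner(xs, xs + L, +1.0)  # I_-
    return D0 * trap(Ip**2 * Im, xs) / trap(Ip, xs)**3

d_plus = D_eff(lambda z: np.cos(z), F=0.3)
d_minus = D_eff(lambda z: -np.cos(z), F=-0.3)
```

The two values should agree to quadrature accuracy, illustrating the claimed symmetry of the effective diffusion under the simultaneous sign flips.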

Reichl and friends find that the mean first-passage time for a Brownian rotor is governed by the lowest Floquet quasi-eigenvalues of the rotor’s Fokker–Planck equation, and varies as the inverse of the lowest Floquet decay rate. That is to say, the Floquet procedure naturally provides quantities with the dimensions of time, and those quantities are the natural timescales for phenomena of interest. By analogy, we expect that the first-passage time in a periodic potential will behave in a similar manner; because the Floquet decay rates are the eigenvalues of the corresponding 2D Fokker–Planck problem, the SUSY partner of that problem will map to a 1D diffusion problem with the same Floquet decay rates. We expect, therefore, that Floquet timescales will be the same for SUSY partners. Furthermore, quantities derived from these timescales, such as the effective diffusion, should also be invariant under SUSY exchange.

Open Questions

Two questions call themselves to mind, both concerning generalizations of the diffusion problems treated above. First, there is the situation of population genetics with time-varying fitness, and second, we have the case of stochastic neural dynamics with time-varying inputs. In the former problem, we aim to understand the eigenvalues of the operator

$$\mathcal{L}_{pop} = \mathcal{L}_0 - g'(x)f(\theta) - g(x)f(\theta)\partial_x - \Omega\partial_\theta.$$

As deduced above, it’s the cross term $g(x)f(\theta)\partial_x$ which makes this generalization tricky.

The second situation also involves a generalized time-evolution operator. Suppose that the external drive $\mu(t)$ is periodically varying, and that the fluctuation $\sigma^2(t)$ which provides the diffusion coefficient has a periodic variation on top of a constant:

$$\frac{\sigma^2(t)}{2\tau} = D + f_2(t).$$

Then, we can rewrite the Fokker–Planck equation in the form $\partial_t p(x,t) = \mathcal{L}\, p(x,t)$, and changing over to the 2D perspective, we can cook up the new time-evolution operator,

$$\mathcal{L}_{neuro} = \mathcal{L}_0 - f_1(\theta)\partial_x + f_2(\theta) \partial_x^2 - \Omega\partial_\theta.$$

Again, we are concerned with the eigenvalues of this operator, which provide the natural timescales of interest for our problem, and in particular, we’d like to know if this operator is isospectral with a SUSY partner.

Finally, there is the question of shape invariance. Supposing that a SUSY exists between partner time-evolution operators, does this extra condition also hold, implying analytic solubility by algebraic methods?


category: blog