Blog - Petri net programming (part 6)

Now we take up the subject of how to modify our simulator to use the rate formulas. Although the explanation of the principle got somewhat involved, the modification to the code will be straightforward.

First, working from the ground up, we need to add a bit of structure to our Transition objects: a rate constant, and a method to return the firing rate itself, in a given labelling. As we said, this will return the rate constant times the product of the falling powers of the input token counts.
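As a sketch of that formula (the dict-of-multiplicities representation of the input arcs is an assumption for illustration, not the original code), the rate computation might look like:

```python
def falling_power(n, k):
    # n * (n - 1) * ... * (n - k + 1): the number of ordered ways
    # to draw k distinct tokens from a place holding n tokens
    result = 1
    for i in range(k):
        result *= n - i
    return result

def firing_rate(rate_constant, input_arcs, labelling):
    # rate constant times the product, over the input places,
    # of the falling powers of their token counts
    rate = rate_constant
    for place, k in input_arcs.items():
        rate *= falling_power(labelling[place], k)
    return rate
```

For example, a transition consuming two H tokens and one O token, with rate constant 0.5, in a labelling with four H and three O, gets rate 0.5 * (4 * 3) * 3 = 18.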

The other change is to the logic of a Petri net instance, which is to choose the next rule to fire, using relative probabilities determined by the firing rates returned by the Transitions.

Let’s now discuss the algorithm for firing the transitions. We want this to be realistic for low concentrations as well, which means that the stochastic nature should lead to different execution sequences every time. So we won’t just assume that the transitions fire at a smooth rate, as would be the case in a direct numerical approximation to the rate equation.

So, given a time interval delta-t, and a firing rate Tr for each transition T, the *expected* number of firings of T will be Tr * delta-t.

But in our actual simulator, we want the transitions to happen *one at a time* – not least because every time a transition fires, it changes the firing rates of all the transitions. So if the expected number of firings of T is three, we don’t want to just force three firings of T – in the extreme case, the artificiality of this would manifest in the fact that T might consume all of its tokens after firing once, so that it *couldn’t* fire three times in that labelling.

In addition, the following issue arises: suppose U is expected to fire three times, and V is expected to fire five times. By what realistic mechanism would we interleave the three firings of U with the five firings of V?

Our solution to this is to choose delta-t small enough that the probability of any transition firing is “low.” Then we just stage a contest, in which we randomly choose a transition (with equal probability among all the transitions), and, if its expected number of firings is p, with 0 <= p << 1, we flip a biased coin that gives heads with probability p – if the transition gets heads, it fires, the labelling gets updated, all the firing rates get re-evaluated, and a new contest begins.

This way we can also run empirical experiments on the statistical variation across runs of the simulator.

How small should delta-t be for this to give a true approximation to a real stochastic process on the given Petri net?

The crux of our approach here is that we are using the expected number of times e that a transition will fire as the probability that it will fire during delta-t. Clearly, if e exceeds one this doesn’t work at all – we cannot have a probability of 300% that a transition will fire.

So to a first approximation, try the following. Let Mr(l) be the maximum of the firing rates of the transitions in labelling l. Then let delta-t(l) = 1/Mr(l).

In this time interval, the fastest firing transition T’ will have an expected number of firings

T’r * delta-t = Mr * 1/Mr = 1,

and all other transitions will have expected firing counts between 0 and 1.

So we *could* use these as probabilities, but it’s still not veridical, especially for the transitions with the highest firing rates. This is clear for the fastest transition T’ – using this method, T’ would fire with probability 1 during delta-t. In reality, sometimes it won’t fire, sometimes it will fire twice, etc.

So let eps be a small number between 0 and 1, that will control how good our approximation is – and let delta-t = eps / Mr.

Then T’ will have Mr * delta-t = eps expected firings during delta-t – and all other expected firing counts will be less than eps.
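In code, this choice of time step might look like the following sketch (the function name is assumed, for illustration):

```python
def choose_delta_t(firing_rates, eps):
    # Scale delta-t so that the fastest transition expects only eps
    # firings in the interval, and every other transition expects fewer.
    max_rate = max(firing_rates)
    return eps / max_rate
```

For rates [2.0, 5.0, 1.0] and eps = 0.1 this gives delta-t = 0.02, so the expected firing counts are 0.04, 0.1, and 0.02 – each at most eps.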

It is for small eps that the substitution of probability for expected value becomes valid.

The tradeoff here is that making eps smaller yields a more veridical simulation, but increases the running time of the simulation. On the other hand, if eps is *too* small, then numerical rounding issues on the machine will introduce distortions.

Mathematical aside. Why is it legitimate to use expected values as probabilities, for small intervals of time delta-t?

Suppose that the expected number of firings is e. The full statement is that there is a probability distribution P(delta-t) that assigns a probability to each number k: P(delta-t)(k) = the probability that there will be k firings during delta-t. Then e is the mean of this distribution.

The exact shape of P(delta-t) depends on the physical characteristics of the underlying process – which are not determined by the formal structure of the Petri net. Consider the difference in the shape of P(delta-t) that would occur, on the one hand, for a process in which the firing of a transition *increased* the chance of it firing again during the window, versus one in which each firing *decreased* that chance.

Under the assumption that the firings are independent of each other, P(delta-t) assumes the shape of a Poisson distribution. And as delta-t –> INF, P(delta-t) approaches a normal distribution.

But what can we say, in general, about the shape of P(delta-t) as delta-t approaches zero – which is the regime we are using for our simulation? By “in general,” I mean regardless of the specific nature of P(delta-t).

What if delta-t were zero? Then there would be a 100% chance that the number of firings is 0, and a 0% chance of any other value. So this is the distribution towards which P(delta-t) must tend, as delta-t –> 0.

Now what if e is a very small positive number?

Then the distribution will look like this:

[(0,p), (1,q), with p just under 1, and q just over 0.]

The value p at 0 is slightly less than 1, the value q at 1 is a small positive number, and all the other values are *much* smaller than q. As e tends to zero, the other terms become negligible in relation to q. So towards the limit, this becomes a valid approximation:

[(0,p), (1,q), p just under 1, q = 1 - p.]

where p + q = 1.
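We can check these magnitudes numerically for the Poisson case mentioned above, with a small mean; a sketch:

```python
from math import exp, factorial

def poisson_pmf(mean, k):
    # Probability of exactly k firings when the expected number is `mean`
    return exp(-mean) * mean**k / factorial(k)

e = 0.01                 # small expected number of firings
p = poisson_pmf(e, 0)    # just under 1
q = poisson_pmf(e, 1)    # very close to e
tail = 1.0 - p - q       # all other values: negligible relative to q
```

Here p is about 0.990, q is about 0.0099, and the tail (two or more firings) is roughly half a percent of q – so treating q as both the expected count and the firing probability is accurate to that order.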

Now E(P(delta-t)) = p * 0 + q * 1 = q = probability that the transition fires once during delta-t = probability that it fires at all during delta-t.

E(P(delta-t)) = P(delta-t)(1) = SUM[k >= 1] P(delta-t)(k).

Now, for our transitions, we have that E(# of firings) = Tr * delta-t = q.

So (under these assumptions) the number we compute from the rate coefficients, Tr * delta-t, is the *probability* that T will fire during delta-t.

The modification to our code then follows as a matter of course from these considerations.

```
def FireOneRule(labelling):
    M = max(tr.GetFiringRate(labelling) for tr in this.transitions)
    delta_t = EPS / M
    while True:
        i = random(1, len(this.transitions))   # pick a transition at random
        T = this.transitions[i]
        q = T.rate(labelling) * delta_t        # expected firings = probability
        rand = random(0, 1)
        if rand <= q:                          # biased coin flip
            T.fire(labelling)
            return
```

Code changes.

All the changes can be made as an incremental development of our previous Petri net simulator.

Here is the new transition class:
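The class itself isn’t included in this revision; here is a sketch of what it might look like, with arcs stored as dicts from place names to multiplicities (the names and the representation are assumptions for illustration, not the original code):

```python
class Transition:
    def __init__(self, name, rate_constant, inputs, outputs):
        self.name = name
        self.rate_constant = rate_constant
        self.inputs = inputs      # {place: multiplicity} consumed
        self.outputs = outputs    # {place: multiplicity} produced

    def firing_rate(self, labelling):
        # rate constant times the product of the falling powers
        # of the input token counts
        rate = self.rate_constant
        for place, k in self.inputs.items():
            n = labelling[place]
            for i in range(k):
                rate *= n - i
        return rate

    def fire(self, labelling):
        # consume the input tokens, produce the output tokens
        for place, k in self.inputs.items():
            labelling[place] -= k
        for place, k in self.outputs.items():
            labelling[place] += k
```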

Here is the Petri net class:
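Again as a sketch, not the original code: the net holds the transitions and the labelling, keeps a running sum of delta-t, and stages the contest described above. A minimal stand-in transition is included so the sketch runs on its own; here each contest round advances the clock by delta-t, though the original code may account for time differently.

```python
import random

class SimpleTransition:
    # Minimal stand-in transition, just enough for the loop below.
    def __init__(self, rate_constant, inputs, outputs):
        self.rate_constant = rate_constant
        self.inputs = inputs
        self.outputs = outputs

    def firing_rate(self, labelling):
        rate = self.rate_constant
        for place, k in self.inputs.items():
            for i in range(k):
                rate *= labelling[place] - i
        return rate

    def fire(self, labelling):
        for place, k in self.inputs.items():
            labelling[place] -= k
        for place, k in self.outputs.items():
            labelling[place] += k

class PetriNet:
    EPS = 0.1   # controls how veridical the approximation is

    def __init__(self, transitions, labelling):
        self.transitions = transitions
        self.labelling = labelling
        self.time = 0.0   # running sum of delta-t

    def fire_one_rule(self):
        # delta-t is rescaled from the current maximum firing rate
        max_rate = max(t.firing_rate(self.labelling)
                       for t in self.transitions)
        if max_rate == 0:
            return False   # no transition can fire: deadlock
        delta_t = self.EPS / max_rate
        while True:
            self.time += delta_t
            t = random.choice(self.transitions)       # stage a contest
            q = t.firing_rate(self.labelling) * delta_t
            if random.random() <= q:                  # biased coin flip
                t.fire(self.labelling)
                return True
```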

The output will show a running sum of delta-t.

Sample runs.

What does our H2O simulation converge to?

Periodic behavior.

Bistable behavior.

Applications / exercises.