Continued from Part 3.
One application of this machinery has been to understand how evolution can produce altruistic behaviour. A behaviouristic definition of “altruism” would go something like, “Acting to increase the reproductive success of another individual at the expense of one’s own”. This can be written out using game theory; one defines a parameter $b$ to stand for benefit, a parameter $c$ to stand for cost, and a payoff function which depends on $b$, $c$ and the strategy employed by an organism.
We shall consider a lattice of sites, each one of which can be in one of three states: empty, denoted by 0; occupied by a selfish organism, denoted by $S$; and occupied by an altruist, denoted by $A$.
In the reproduction process, an organism spawns an offspring into an adjacent empty lattice site. This turns a pair of type $S0$ into a pair of type $SS$. At what rate should the transition occur? If we presume some baseline reproductive rate, call it $\lambda$, then the presence of altruistic neighbours should augment that rate. We’ll say that if the number of nearby altruists is $n$, then selfish individuals will reproduce at a rate $\lambda + bn$, where the parameter $b$ specifies how helpful altruists are. The reproduction process for altruists, which we can write $A0 \to AA$, occurs at a rate $\lambda + bn - c$. Here, the parameter $c$ is the cost of altruism: it’s how much an altruist gives up to help others.
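The birth dynamics can be sketched in a few lines of Python. Everything numerical here is an assumption made for the sketch, not something fixed by the text: the parameter values, the von Neumann (four-site) neighbourhood, and the rejection-style update that turns rates into acceptance probabilities.

```python
import random

# Monte Carlo sketch of the birth process on a periodic L x L lattice.
# States: 0 = empty, S = selfish, A = altruist.  The values of lam, b, c
# and the four-neighbour lattice are illustrative assumptions.
L = 32
EMPTY, S, A = 0, 1, 2
lam, b, c = 1.0, 0.4, 0.1   # baseline rate, benefit, cost (assumed)

random.seed(1)
grid = [[random.choice([EMPTY, S, A]) for _ in range(L)] for _ in range(L)]

def neighbours(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j),
            (i, (j + 1) % L), (i, (j - 1) % L)]

def step(grid):
    """One attempted birth: a random occupied site tries to spawn into a
    random neighbour, with probability proportional to its rate."""
    i, j = random.randrange(L), random.randrange(L)
    if grid[i][j] == EMPTY:
        return
    nbrs = neighbours(i, j)
    n_A = sum(grid[x][y] == A for x, y in nbrs)   # altruists nearby
    rate = lam + b * n_A                           # S0 -> SS at lambda + b*n
    if grid[i][j] == A:
        rate -= c                                  # A0 -> AA pays the cost c
    x, y = random.choice(nbrs)
    if grid[x][y] == EMPTY and random.random() < max(rate, 0) / (lam + 4 * b):
        grid[x][y] = grid[i][j]                    # offspring fills the site

for _ in range(10_000):
    step(grid)
```

Tracking the densities of $S$ and $A$ over many sweeps of a loop like this gives raw material against which mean-field and pair approximations can be checked.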
In the differential equation for the density $p_S$, the transition $S0 \to SS$ contributes a term proportional to the density of $S0$ pairs:
$$\frac{dp_S}{dt} \supset \left(\lambda + b\,\bar{n}_A\right) p_{S0},$$
where we have written $p_{S0}$ for $p(S0)$, to save a little ink, and $\bar{n}_A$ for the average number of altruists adjacent to the reproducing site. Writing $z$ for the lattice coordination number, the product $\bar{n}_A\, p_{S0}$ unpacks into a triplet density, $(z-1)\,p_{AS0}$. All things told, with deaths occurring at some per-capita rate $\mu$, the rate of change of $p_S$ is given by
$$\frac{dp_S}{dt} = \lambda\, p_{S0} + b\,(z-1)\, p_{AS0} - \mu\, p_S.$$
Yuck! After writing a few equations like that, it’s easy to wonder if maybe we should look for new mathematical ideas which could help us better organise our thinking.
The next steps follow the general plan we laid out above. We write differential equations for the pairwise probabilities $p_{\sigma\sigma'}$, which depend on triplet quantities $p_{\sigma\sigma'\sigma''}$. Then, we impose a pair approximation, declaring that $p_{\sigma\sigma'\sigma''} \approx p_{\sigma\sigma'}\, q_{\sigma''|\sigma'}$, which gives us a closed system of equations. Next, we find the fixed point with $p_A = 0$, and we perturb around that fixed point to see what happens when a strain of altruists is introduced into a selfish population. The dominant eigenvalue of the time-evolution matrix tells us, in this approximation, whether the altruistic strain will invade the lattice or wither away. The condition can be written in the form
$$b\, \tilde{q}_{A|A} > c.$$
Here, we’ve written $\tilde{q}_{A|A}$ for the conditional probability of altruists contacting altruists which obtains as the local densities equilibrate. That is to say, an attempted invasion by altruists will succeed if a measure of benefit, $b$, multiplied by an indicator of “assortment” among genetically similar individuals, $\tilde{q}_{A|A}$, is greater than the cost of altruistic behaviour, $c$.
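The mechanics of the eigenvalue test can be sketched numerically. The matrix below is a stand-in: a hypothetical $2 \times 2$ linearization about the all-selfish fixed point, with invented entries; in a real analysis, every entry would come from differentiating the closed pair-approximation equations. The point is only the procedure: compute the dominant eigenvalue and compare its sign with the benefit-times-assortment-versus-cost rule.

```python
import math

# Hypothetical linearization of the perturbation dynamics about the
# all-selfish fixed point.  Every entry is invented for illustration;
# the parameter values are assumptions of this sketch.
b, c = 0.4, 0.1    # benefit and cost (assumed)
q_AA = 0.3         # equilibrated altruist-altruist contact probability (assumed)

J = [[b * q_AA - c, 0.2],
     [0.0,          -0.5]]

# Dominant eigenvalue of a 2 x 2 matrix via the characteristic polynomial.
tr  = J[0][0] + J[1][1]
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
disc = tr * tr - 4.0 * det
# Real eigenvalues when the discriminant is non-negative; otherwise the
# real part of the complex-conjugate pair is tr / 2.
dominant = (tr + math.sqrt(disc)) / 2.0 if disc >= 0 else tr / 2.0

invades = dominant > 0       # perturbation grows: altruists invade
rule    = b * q_AA > c       # the Hamilton-style condition
```

Because this toy matrix is triangular with the rule sitting on its diagonal, the agreement between the two tests is built in; in the real system, that agreement is a derived result.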
After all our mucking with eigenvalues, we have found a condition which is strongly reminiscent of a classic and influential idea from mid-twentieth-century evolutionary theory. In biology, the inequality
$$r b > c \qquad (1)$$
is known as Hamilton’s rule (Van Dyken et al., 2011). This is a rule-of-thumb for when natural selection can favour altruistic behaviour: altruists can prosper when the inequality is satisfied. Hamilton’s rule was originally derived for unstructured populations, with no network topology or spatial arrangement to them. We can understand Hamilton’s rule in this context in the following way:
How well an organism fares in the great contest of life depends on the environment it experiences. During the course of its life, an individual member of a species will interact with a set of others, which we could call its “social circle”. The composition of that social circle affects how well an individual will propagate its genetic information to the next generation — its fitness. In an unstructured population, we can think of such circles as being formed by taking random samples of the population. An altruist, by our definition, sacrifices some of its own potential so that offspring of other individuals can prosper. A social circle of altruists can fare better than a social circle of selfish individuals, increasing the chances that social circles which form in the next generation will contain altruists (Van Dyken et al., 2011).
It’s common to treat “benefit” and “cost” as parameters of the system. We could potentially derive them from more fundamental dynamics, if we looked more closely at the interactions within a particular ecosystem, but right now, they’re just knobs we can turn. What about the remaining quantity in Hamilton’s rule, the relatedness $r$: what does it mean? Excellent question! We can get a feel for where the term came from by taking a gene’s-eye view: copies of many of my particular genetic variants will be sitting inside the cells of my close relatives. Consequently, as far as my genes are concerned, if my relatives survive, that’s almost as good as my surviving. When reckoning the benefit of altruism against its cost, then, the aid one organism brings to another ought to be weighted by how “related” they are.
So, we can say that we have “recovered Hamilton’s rule as an emergent property of the spatial dynamics” — if we are willing to draw a circle around the assortment factor in the middle of our formula and declare it to be the “relatedness” $r$.
Knowing where our invasion condition came from, we can appreciate some of the caveats which scientists have raised in connection with Hamilton’s rule.
In particular, $r$ is often taken to be the average relatedness of interacting individuals, as compared to the average relatedness in the population, in which case inequality (1) is referred to as Hamilton’s rule. It is important to note that inequality (1) only describes whether the current level of assortment, as subsumed in the parameter $r$, is sufficient to favour cooperation; it is not a description of the mechanisms that would lead to such assortment. It has been suggested repeatedly that the problem of cooperation can be understood entirely based on Hamilton-type rules of the form (1). Even though often taken as gospel, this claim is wrong in general, for two reasons.
First, and foremost, even if a rule of the form (1) predicts the direction of selection for cooperation at a given point in time, the long-term evolution of cooperation cannot be understood without having a dynamic equation for the quantity $r$, i.e., without understanding the temporal dynamics of assortment. The dynamics of $r$ in turn cannot be understood based solely on the current level of cooperation, and hence expressions of type (1) are in general insufficient to describe the evolutionary dynamics of cooperation. Second, the quantity $r$, which measures the average relatedness among interacting individuals, is insufficient to construct Hamilton’s rule in models that account for variable individual-level death rates and/or group-level events.
Contrary to the popular use of the word, “relatedness” describes a population of interacting individuals, where $r$ refers to how assorted similar individuals are in the population.
Every definition of relatedness must take into account the population. Therefore, relatedness is not the percent of genome shared, genetic distance, or any extent of similarity between two isolated individuals in a larger population. Also, because horizontal gene transfer is commonplace between microbes and selection is strong, phylogenetic distance or any other indirect genetic measure is likely to be inaccurate. Many of these false definitions live on partly because ambiguous heuristics like “$r = 1/2$ for brothers, $r = 1/8$ for cousins”, which require very specific assumptions, are repeated in the primary literature. Also, most non-theoretical papers simply define relatedness as “a measure of genetic similarity” and do not elaborate, or instead leave the precise definition to the supplemental information. Unfortunately, scientists can easily misinterpret this “measure of genetic similarity” to be anything that is empirically convenient, such as genetic distance or percent of genome shared. Largely because of this confusion, we support the more widespread use of the term “assortment”, which is harder to misinterpret. For similar reasons of reader understanding, we also encourage authors to make calculations more explicit, either in the main or supplemental text, and to avoid repeating previous results without giving the assumptions that went into deriving them.
It is for this reason that we called the conditional probability $\tilde{q}_{A|A}$ a measure of “assortment” earlier. Of course, even with this careful choice of terminology, the limitations of our Hamilton-esque rule still apply: we know that because we derived it from the condition that the dominant eigenvalue be positive, it will miss any effects which a fixed-point eigenvalue analysis is not sensitive to.
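One virtue of “assortment” over “relatedness” is that it can be measured directly from a configuration: look at every neighbour of every altruist and count how often that neighbour is also an altruist. Here is a sketch of that measurement; the $4 \times 4$ grid, the periodic boundaries, and the four-site neighbourhood are all illustrative choices.

```python
# Measuring assortment on a configuration: the conditional probability
# that a randomly chosen neighbour of an altruist is itself an altruist.
# The grid below is an invented example.
grid = [["A", "A", "S", "0"],
        ["A", "S", "S", "0"],
        ["0", "S", "A", "A"],
        ["0", "0", "A", "S"]]
L = len(grid)

def q_A_given_A(grid):
    """Fraction of altruists' neighbours that are altruists (periodic)."""
    a_neighbours = total = 0
    for i in range(L):
        for j in range(L):
            if grid[i][j] != "A":
                continue
            for x, y in [((i + 1) % L, j), ((i - 1) % L, j),
                         (i, (j + 1) % L), (i, (j - 1) % L)]:
                total += 1
                a_neighbours += grid[x][y] == "A"
    return a_neighbours / total
```

On this particular grid, 8 of the 24 neighbour slots around altruists hold altruists, so the measured assortment is $1/3$ — a population-level quantity, with no reference to pedigrees or genetic distance between pairs of individuals.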
Stepping back for a moment, notice that although the terms and coefficients started to proliferate on us, we haven’t introduced any remarkably “advanced” or “esoteric” mathematics. Derivatives, matrices, eigenvalues — this is undergraduate stuff! The amount of algebra we’ve been able to stir up without really even trying is, however, a little worrying. We can invent a mathematical model for some particular biological scenario, and we might even be able to solve it, or at least tell how it’ll behave in certain interesting circumstances. But what if we want general results which extend across models, or ideas which will help us identify the common features and the key disparities among a host of examples?
With that attitude, then, a thought towards “higher” mathematics:
A Petri net specifies a symmetric monoidal category (Lerman et al. 2011). Each truncation of the moment-dynamics hierarchy for a system yields a Petri net, and so successive truncations of the moment-dynamics hierarchy yield mappings between categories. Going from a pair approximation to a mean-field approximation, for example, transforms a Petri net whose circles are labelled with pair states into one whose circles are labelled with site states. Category theory might be able to say something interesting here. Anything which can tame the horrible spew of equations which arises in these problems would be great to have. Ought we to be considering, say, the strict 2-category whose objects are moment-closure approximations to an ecosystem, and whose morphisms are symmetric monoidal functors between them?
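The Petri net itself, at least, is easy to write down concretely, even if the categorical structure is not. Here is a minimal encoding of the site-level (mean-field) truncation of our model: places are the three site states, and each birth process is a transition consuming and producing a multiset of places. The data-structure names are inventions of this sketch.

```python
from dataclasses import dataclass

# A minimal Petri-net data structure for the site-level truncation of
# the altruism model.  Places are site states; each transition consumes
# its input places and produces its output places.  This encoding is an
# illustration, not the categorical construction itself.
@dataclass(frozen=True)
class Transition:
    name: str
    inputs: tuple    # places consumed
    outputs: tuple   # places produced

places = ("0", "S", "A")
transitions = (
    Transition("selfish birth",  ("S", "0"), ("S", "S")),   # S0 -> SS
    Transition("altruist birth", ("A", "0"), ("A", "A")),   # A0 -> AA
)

# Both transitions conserve the number of lattice sites, as they must:
# a birth changes a site's state without creating or destroying sites.
assert all(len(t.inputs) == len(t.outputs) for t in transitions)
```

A pair-level truncation would use the same `Transition` structure with places labelled by pair states, which is exactly the kind of relabelling a functor between the corresponding categories would have to track.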