Pick another index $j$ uniformly at random to engage in the interaction. If $i\ne j$, compute

(1)$w_i=\rho_i (1-\lambda^{\kappa_i})\, U(1-\Delta,1+\Delta) \quad\text{and}\quad w_j=\rho_j (1-\lambda^{\kappa_j})\, U(1-\Delta,1+\Delta)$

where $U(a,b)$ denotes a random variable uniformly distributed on the interval $(a,b)$, drawn independently for each participant.

Compute

(2)$R=\frac{\rho_i+\rho_j}{w_i+w_j}$

and set the updated $\rho$ values

(3)$\rho_i'=w_i R \quad\text{and}\quad \rho_j'=w_j R$
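As a concrete illustration, the interaction step of Eqs. (1)–(3) can be sketched in Python. The function name and the parameter spellings (`lam` for $\lambda$, `Delta` for $\Delta$) are choices made for this sketch, not part of the model's specification:

```python
import random

def interact(rho_i, kappa_i, rho_j, kappa_j, lam, Delta):
    """One pairwise interaction, Eqs. (1)-(3).

    Each agent's weight scales its resources rho by a knowledge
    factor (1 - lam**kappa) and an independent uniform noise term
    on [1 - Delta, 1 + Delta].  The renormalisation factor R then
    rescales the weights so that the pair's total resources are
    conserved.
    """
    # Eq. (1): noisy, knowledge-weighted claims on the joint pot
    w_i = rho_i * (1 - lam ** kappa_i) * random.uniform(1 - Delta, 1 + Delta)
    w_j = rho_j * (1 - lam ** kappa_j) * random.uniform(1 - Delta, 1 + Delta)
    # Eq. (2): renormalisation factor
    R = (rho_i + rho_j) / (w_i + w_j)
    # Eq. (3): updated resources rho_i', rho_j'
    return w_i * R, w_j * R
```

Note that by construction $\rho_i'+\rho_j'=\rho_i+\rho_j$, so the interaction redistributes resources within the pair without creating or destroying any.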

Possibly increase the participants’ knowledge through learning

(4)$\kappa_i'=\kappa_i + \mathrm{Bernoulli}(p_{\lambda}) \quad\text{and}\quad \kappa_j'=\kappa_j + \mathrm{Bernoulli}(p_{\lambda})$

Note that the possible knowledge increment is taken to be independent of the change in share of resources, as it is generally possible to learn, and equally importantly not to learn, from either success or failure. (One could argue that the amount learned, when something is learned, ought to scale in some way with the amount already known; for simplicity we start with a unit additive increment here.)

Generation of offspring

At each time $t$, after the resource-exchange and learning updates, randomly select $\lfloor\gamma P_t\rfloor$ individuals as “parents”. For each parent:

Create a new descendant (with index $k$) with knowledge $\kappa_k=1$.

Split the parent's resources: the parent keeps $\rho_i'=(1-\phi)\rho_i$, and the remaining $\rho_k=\phi\rho_i$ is used to initialise the child.
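The offspring step above can be sketched as follows. Representing agents as dictionaries with keys `"rho"` and `"kappa"` is a choice made for this sketch, as is sampling parents without replacement:

```python
import math
import random

def reproduce(agents, gamma, phi):
    """Offspring generation step.

    Selects floor(gamma * P_t) parents uniformly at random (here
    without replacement); each parent keeps a (1 - phi) share of
    its resources, and the child is initialised with the remaining
    phi share and knowledge kappa = 1.
    """
    n_parents = math.floor(gamma * len(agents))
    for parent in random.sample(agents, n_parents):
        child = {"rho": phi * parent["rho"], "kappa": 1}
        parent["rho"] *= (1 - phi)
        agents.append(child)
    return agents
```

As with the pairwise interaction, this step conserves total resources: each parent–child split leaves $\rho_i' + \rho_k = \rho_i$.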