The Azimuth Project
Experiments in longevity vs resource distribution


This page contains a silly model.


The model has the following parameters:

  • Current population $P_t$
  • Initial population $P_0$, with $P_0 > 1$
  • Number of timesteps $T$, with $T > 1$
  • Variability of interactions $\Delta$, with $0 < \Delta < 1$
  • Learning probability $p_\lambda$, with $0 < p_\lambda < 1$
  • Learning increment $\lambda$, with $0 < \lambda < 1$
  • Population growth $\gamma$, with $0 < \gamma < 1$
  • Fortune split rate $\phi$, with $0 < \phi < 1$
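As a concrete starting point, the parameters might be collected into a single structure. The numerical values below are purely illustrative choices satisfying the stated constraints, not values taken from this page:

```python
# Illustrative parameter choices (assumed values, not from the model's authors).
params = {
    "P0": 100,        # initial population, P_0 > 1
    "T": 1000,        # number of timesteps, T > 1
    "Delta": 0.25,    # variability of interactions, 0 < Delta < 1
    "p_lambda": 0.1,  # learning probability, 0 < p_lambda < 1
    "lam": 0.5,       # learning increment, 0 < lambda < 1
    "gamma": 0.05,    # population growth, 0 < gamma < 1
    "phi": 0.3,       # fortune split rate, 0 < phi < 1
}
```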

Changing share of resources

For each index i in 1 to $P_t$:

  • Uniformly randomly pick another index j to engage in an interaction. If $i \neq j$, compute

    (1) $w_i = \rho_i (1-\lambda^{\kappa_i}) \, U(1-\Delta,1+\Delta) \quad \text{and} \quad w_j = \rho_j (1-\lambda^{\kappa_j}) \, U(1-\Delta,1+\Delta)$

    where $U(a,b)$ is a uniformly distributed random variable on the interval $(a,b)$.

  • Compute

    (2) $R = \frac{\rho_i + \rho_j}{w_i + w_j}$

    and set the updated ρ values

    (3) $\rho_i' = w_i R \quad \text{and} \quad \rho_j' = w_j R$
  • Possibly increase the participants’ knowledge through learning

    (4) $\kappa_i' = \kappa_i + \mathrm{Bernoulli}(p_\lambda) \quad \text{and} \quad \kappa_j' = \kappa_j + \mathrm{Bernoulli}(p_\lambda)$

    Note that the possible knowledge increment is taken to be independent of the change in share of resources as it is generally possible to learn, and equally importantly not to learn, from either success or failure. (There is an argument about whether the amount learned, in the event something is learned, ought to scale in some way with the amount already known. For simplicity we start with the additive term here.)
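One round of the interaction loop above, steps (1)–(4), can be sketched as follows. The function name and argument names are chosen here for illustration; the update itself follows the equations directly:

```python
import random

def interaction_step(rho, kappa, lam, delta, p_lambda):
    """One round of pairwise interactions, a sketch of steps (1)-(4).

    rho   : list of resource shares, one entry per individual
    kappa : list of knowledge levels, one entry per individual
    """
    n = len(rho)
    for i in range(n):
        j = random.randrange(n)  # uniformly pick a partner
        if i == j:
            continue             # only interact when i != j
        # (1) weights: resources scaled by experience and random luck
        w_i = rho[i] * (1 - lam ** kappa[i]) * random.uniform(1 - delta, 1 + delta)
        w_j = rho[j] * (1 - lam ** kappa[j]) * random.uniform(1 - delta, 1 + delta)
        # (2)-(3) redistribute the pair's combined resources in proportion
        # to the weights; note rho_i' + rho_j' = rho_i + rho_j, so total
        # resources are conserved by the exchange
        R = (rho[i] + rho[j]) / (w_i + w_j)
        rho[i], rho[j] = w_i * R, w_j * R
        # (4) each participant independently learns with probability p_lambda
        kappa[i] += random.random() < p_lambda
        kappa[j] += random.random() < p_lambda
```

A design point worth noting: the normalisation $R$ makes each interaction a zero-sum redistribution within the pair, so the population's total resources change only through reproduction.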

Generation of offspring

At each time t, after the resource-exchange and learning updates, randomly select $\gamma P_t$ individuals as “parents”. For each parent:

  • Create a new descendant (with index k) with knowledge $\kappa_k = 1$.

  • Split the parent’s resources: the parent keeps $\rho_i' = (1-\phi)\rho_i$, and the remaining $\rho_k = \phi \rho_i$ is used to initialise the child.
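The reproduction step above can be sketched as follows. Again the function and argument names are illustrative choices; the rounding of $\gamma P_t$ to an integer number of parents is an assumption, since the page does not specify it:

```python
import random

def reproduction_step(rho, kappa, gamma, phi):
    """Offspring generation: gamma * P_t randomly chosen parents each split
    off a fraction phi of their resources to a new child with knowledge 1."""
    n_parents = int(gamma * len(rho))  # assumed: truncate gamma * P_t
    parents = random.sample(range(len(rho)), n_parents)
    for i in parents:
        rho.append(phi * rho[i])   # child's starting resources, rho_k = phi * rho_i
        kappa.append(1)            # child starts with knowledge kappa_k = 1
        rho[i] *= (1 - phi)        # parent keeps rho_i' = (1 - phi) * rho_i
```

As with the pairwise exchange, the split conserves total resources: the parent's share and the child's share sum to the parent's original $\rho_i$.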

Experimental observations