2017-12-21 --- ThresholdSpender Evolution


  1. The optimal "ThresholdSpender" strategy threshold depends on the number of validators in the population. It also likely depends on other simulation parameters and on the amount of PC attached to the genesis block.
  2. A locally optimal threshold seems to be ~55, at least for a population of similar validators of size ~10.
  3. A group of such validators does not efficiently come to consensus; there are many long "dead" branches of the DAG.
  4. Adding a single validator with a much larger threshold (300 was used here) to such a group produces a much tighter DAG with few offshoot branches.
  5. The extra validator outperforms the core population.
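For reference, here is a minimal sketch of how I understand a ThresholdSpender to behave: the validator hoards PC until its balance reaches the threshold, then attaches the whole balance to a proposed block. The class and method names are hypothetical, not the actual simulation code.

```python
class ThresholdSpender:
    """Hoards PC until a threshold is reached, then spends it all on one block.

    Hypothetical sketch of the strategy described above; the real simulation
    code may differ in how spending and proposing are wired together.
    """

    def __init__(self, threshold):
        self.threshold = threshold
        self.pc_balance = 0

    def receive_pc(self, amount):
        self.pc_balance += amount

    def maybe_propose(self):
        # Only propose once the hoarded PC meets the threshold; the entire
        # balance is attached as weight to the proposed block.
        if self.pc_balance >= self.threshold:
            spend = self.pc_balance
            self.pc_balance = 0
            return spend
        return None  # skip this round, keep hoarding
```

This also makes the size-3 failure mode below concrete: if the population can never route enough PC to any single validator, `maybe_propose` returns `None` forever and everyone earns zero REV.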


  1. For a population of size 10 the answer seems to be 55, yet for a population of size 3 it is impossible to reach that threshold, so all validators earn zero REV. Therefore 55 is not the optimum for 3 validators. Obviously the amount of PC a validator can earn is also a function of the amount of PC attached to the genesis block (and of the strategies of the other validators, for that matter), so these quantities must also impact the optimum. Note all simulations in this entry are run with the same parameters and genesis PCA = 10.
  2. Consider the following two tables. Both show the mean and standard deviation of the thresholds in the population of validators for each generation. Note: a generation gives each validator 100 opportunities to create a block, and the next generation is created from the current one using a fitness function equal to the REV earned by each validator over those 100 rounds. The first table shows the mean increasing roughly linearly while the standard deviation remains constant. The second table shows both quantities holding stable, with the mean around 55. The results are split into two tables because I did not want to spend the computational time to evolve the system all the way to the optimum once the first table showed I was still far from it, so I jumped the initial population mean up and re-started the simulation. Also note that even though the initial population size is 10, the population grew a little over time (~1 validator every other generation; the size is not fixed by the simulation, though perhaps it should be...). Since the size of the population is a factor in determining the optimum, the optimum was a bit of a moving target, but since the population didn't change much, the result probably still holds. Given the precision involved, ~50 may be a more accurate statement in terms of significant figures.
    1. thresholdDist_linearGrowth.csv
    2. thresholdDist_stable.csv
  3. Consider the following visualization of running a population of 10 validators with thresholds 55 +/- 1. The simulation runs for 100 rounds, as usual. We see that many long, dead branches are produced.
    1. blockDAG_10at55.pdf
  4. Consider the following visualization from running a population with the same specs as the previous, plus one extra validator with threshold 300. With the extra validator the chain has less branching. Notice that after the initial rush of all the validators building up to their thresholds, there is a burst of branching before the big validator starts dropping regular blocks, after which the chain is more constrained. Specifically, it is constrained around those heavily weighted blocks, and you can see some of the branching produced by the core population around them.
    1. blockDAG_10at55with1at300.pdf
  5. The following tables show the PC balances and REV balances for each validator during each round. Validator_0000 is the one with 300 as its PC spend threshold, while the remaining ones are the core population with thresholds around 55. We see the strongly periodic nature of validator_0000's PC balance corresponding to the regular blocks seen in the visualization. We also see validator_0000 run away with all the REV earnings after about round 20.
    1. PCFlow_10at55with1at300.csv
    2. RevFlow_10at55with1at300.csv
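The generation update described in point 2 above can be sketched as follows. Fitness-proportional selection with small mutation is my reading of "the next generation is created from the current one using a fitness function equal to the REV earned"; the function shape, mutation scale, and the `growth` parameter (modeling the ~1 validator every other generation drift) are all assumptions, not the actual simulation code.

```python
import random

def next_generation(thresholds, rev_earned, growth=0.5):
    """Build the next population by fitness-proportional sampling.

    `thresholds` and `rev_earned` are parallel lists: each validator's
    PC-spend threshold and the REV it earned over the 100-round generation.
    Hypothetical sketch of the evolutionary step described in the text.
    """
    # Population is not fixed: occasionally add one extra slot.
    size = len(thresholds) + (1 if random.random() < growth else 0)
    total = sum(rev_earned)
    if total == 0:
        # Nobody earned REV (e.g. thresholds unreachable): sample uniformly.
        parents = [random.choice(thresholds) for _ in range(size)]
    else:
        # REV earned acts directly as fitness weight.
        parents = random.choices(thresholds, weights=rev_earned, k=size)
    # Mutate slightly so the population mean can drift between generations.
    return [max(1.0, t + random.gauss(0, 1)) for t in parents]
```

Under this kind of update, a mean drifting linearly upward (first table) is what selection pressure toward a higher optimum looks like, and a stable mean with stable spread (second table) is what mutation-selection balance around the optimum looks like.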

Conclusions, Speculations and Future Work

  • The fact that the 300 threshold does so well against the others suggests that 55 is not an evolutionarily stable optimum. Doing an "invasion"-type simulation could be interesting. In practice I wonder whether this phenomenon will greatly increase the time to reach consensus, because everyone will try to be the "big" validator and hoard their PC for a long time. That situation is probably invadable by a validator with a low threshold, though, so maybe no single threshold is stable and the population will swing between the two extremes. Or maybe the equilibrium is a mixed state of different thresholds (which actually sounds ideal!).
  • The serious branching seen in the pure 55 simulation could possibly be reduced by adding additional rules to the protocol which cause the REV earned to decay the farther the proposed block is from the "last finalized block." Of course, it is not entirely clear what this means since finality is subjective, so there is still some work to be done around that idea.
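One concrete form the decay rule could take is geometric decay in the block's distance past the (subjectively observed) last finalized block. Both the geometric form and the 0.9 base below are purely illustrative assumptions, not a proposed protocol change.

```python
def decayed_rev(base_rev, distance, decay=0.9):
    """REV earned for a block proposed `distance` hops past the validator's
    locally observed last finalized block.

    Illustrative sketch only: the decay base (0.9) and the choice of
    geometric decay are assumptions, and `distance` glosses over the fact
    that finality, and hence the reference block, is subjective.
    """
    return base_rev * decay ** distance
```

A rule like this would leave rewards on the main trunk untouched while making long dead branches progressively unprofitable to extend.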