**This is a transcript of the discussion after the call. Feel free to continue the discussion in this topic.**

*Initial distribution*:

Q: How do you think or expect the results to change with respect to different initial distributions?

A: In general, the motivation is to see a bigger difference between validators and still achieve a uniform distribution of wealth.
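To make "different initial distributions" concrete, here is a minimal sketch of two starting points one might compare: an equal split versus a heavy-tailed (Pareto) split, summarized by a Gini coefficient. The function names, the 1024-validator count, and the Pareto shape parameter are illustrative assumptions, not taken from the presentation.

```python
import random

def uniform_stakes(n, total):
    """Every validator starts with an equal share of the total stake."""
    return [total / n] * n

def pareto_stakes(n, total, alpha=1.5, seed=0):
    """Heavy-tailed start: a few validators hold most of the stake.
    (alpha=1.5 is an arbitrary illustrative choice.)"""
    rng = random.Random(seed)
    raw = [rng.paretovariate(alpha) for _ in range(n)]
    scale = total / sum(raw)
    return [x * scale for x in raw]

def gini(stakes):
    """Gini coefficient: 0 = perfectly uniform, close to 1 = concentrated."""
    s = sorted(stakes)
    n = len(s)
    weighted = sum((i + 1) * x for i, x in enumerate(s))
    return 2 * weighted / (n * sum(s)) - (n + 1) / n

print(round(gini(uniform_stakes(1024, 1_000_000)), 3))  # 0.0
print(round(gini(pareto_stakes(1024, 1_000_000)), 3))   # noticeably higher
```

Running the same wealth-evolution simulation from each starting point and tracking how the Gini coefficient evolves is one direct way to quantify whether the system drifts toward or away from uniformity.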

*How others handle initial distribution*:

Q: Are we looking into the original initial distributions of other chains and modeling based on that to inform the proper way of setting up initial distributions? Also, is there any thought to changing the number of agents that may be more appropriate for starting a network, like number X vs. number Y in the experiments?

A: We started with 1024 agents due to computational constraints, and the results that have been shown represent a timeline of one year. What happens in between doesn't change, but this is important because of the state relativization algorithm as well. If we have a stationary condition and somehow half of the validators drop out of the network, there is a time period in which the state relativization algorithm has to adapt. During that time, you will see a discrepancy between the state supply and what it should be.

*Validators and committing to other blocks*:

Q: Can the validator commit to another block, essentially changing the previous block in its own chain?

A: They have the opportunity to propose a block and can also publish it in a way that benefits them the most. An attacker could potentially override the change to earn the fees for themselves.

*Total stake vs. inferred stake graph*:

Q: What causes the discrepancy between the total stake and inferred stake? Is it due to the algorithm used to infer the total stake?

A: The discrepancy is due to the algorithm's presence.

C: However, based on previous discussions, there is an age parameter involved, where the algorithm can be improved through a learning rate. There are different versions of these algorithms that are likely more accurate.
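One common way a learning rate enters such an estimator is as an exponentially weighted update that moves the current estimate toward each new observation. This is a generic sketch of that idea, not the actual state relativization algorithm; the function name and rate value are hypothetical.

```python
def update_inferred_stake(estimate, observed, learning_rate=0.1):
    """Move the estimate a fraction of the way toward the observation.

    A higher learning rate adapts faster to stake changes but is noisier;
    a lower one is smoother but lags behind the true value. Illustrative
    only -- the actual inference algorithm may differ.
    """
    return estimate + learning_rate * (observed - estimate)

# A validator's true stake jumps from 100 to 200; watch the estimate lag.
estimate = 100.0
for _ in range(20):
    estimate = update_inferred_stake(estimate, 200.0)
print(round(estimate, 2))  # 187.84 -- still short of the true value 200
```

The transient gap between the estimate and the true value is exactly the kind of discrepancy discussed above: it shrinks over time at a speed set by the learning rate.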

*Randomness in simulations, stability, and stochasticity*:

C: Let's take two systems as an example. In the first scenario, imagine you have two exact copies with the same randomness, like a thermal history. If both systems have the same initial conditions, they will be in the same state at any time. In the other scenario, you change exactly one parameter, e.g., the stake value of one node, and observe whether this creates a difference in the state. This acts as a stability analysis. Randomness differs, and by changing conditions we can see how it affects the system (whether conditions remain stable or not), similar to stochasticity approaches in the physics community.

C: We are utilizing stochasticity as well. The seeds in every slot, in every epoch, are the same, so no matter how often I run the simulation, the results are consistent. What was mentioned is already being used; if we change just one seed, we can observe how it affects the simulation.

C: It's also worth considering that while the seeds might be the same, changing conditions could potentially lead to chaotic systems if the system grows large enough.

*Wealth concentration as an indicator of selfish behavior*:

Q: While observing wealth concentration, can we infer that selfish behavior has occurred (as wealth concentration could be an indicator)?

A: We would need to distinguish where the wealth originated (within the system or just purchased tokens, for example). We could attempt to do that through inverse analysis.

C: If there are suspicions, various factors can be examined, such as a spike in transactions. There is ongoing research and exploration into how other networks handle these types of problems.

*Leaders per slot*:

Q: Does your presentation assume a single leader per slot, or does it consider short-lived forks where there could be multiple leaders in a slot?

A: This work assumes that each slot has only one leader, based on the provided paper; forks are not modeled. The chances of forks are low but increase with the number of validators, which needs to be considered together with the shape of the stake distribution.
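The claim that multi-leader (fork-prone) slots become more likely as the validator set grows can be illustrated under one standard model: a private VRF-style lottery in the style of Ouroboros Praos, where each validator independently leads a slot with probability phi(alpha) = 1 - (1 - f)^alpha for stake fraction alpha. This model and the active-slot coefficient f = 0.05 are assumptions for illustration, not necessarily what the paper uses.

```python
def leader_count_probs(stakes, f=0.05):
    """Probability of 0, exactly 1, and >= 2 slot leaders in one slot,
    assuming each validator independently leads with probability
    phi_i = 1 - (1 - f)**alpha_i, where alpha_i is its stake fraction.
    Illustrative Praos-style model only."""
    total = sum(stakes)
    p_lead = [1 - (1 - f) ** (s / total) for s in stakes]
    p_none, p_one = 1.0, 0.0
    for p in p_lead:
        # extend the running distribution over the leader count by one validator
        p_none, p_one = p_none * (1 - p), p_one * (1 - p) + p_none * p
    return p_none, p_one, 1.0 - p_none - p_one

# Equal stake, growing validator set: slots with >= 2 leaders stay rare
# but become slightly more likely as the set grows.
for n in (16, 256, 1024):
    _, _, p_multi = leader_count_probs([1.0] * n)
    print(n, p_multi)
```

A convenient property of this phi is independent aggregation: the probability that a slot is empty is (1 - f) regardless of how the stake is split, so splitting stake across more validators shifts probability mass from "exactly one leader" to "two or more leaders", i.e., toward forks.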