This repository was archived by the owner on Jan 6, 2025. It is now read-only.
In the last eth2.0 implementers call, we decided it would be worthwhile to run some timing analysis on processing blocks with real-world numbers of attestations.
It would be great to get results from at least one other client. I know not everyone has a working BLS aggregate implementation yet, but anyone who does should give this a try and report results.
Proposed Implementation
Assuming 10M ETH deposited, at 32 ETH per validator, puts us at ~300k validators. With 64 slots per cycle, that is ~5000 validators per slot. With 1000 shards divided across the 64 slots, that is ~16 shards per slot.
If all of the validators coordinate and vote on the same crosslink, and their attestations are aggregated and included in the next slot, then there will be 16 attestations of ~300 validators each per block. This is a good place to start.
We can then make this estimate a worst case by assuming the validators split their votes across 2, 3, 4, or even 5 different crosslink candidates. If all committees split their votes across 2 candidates, then there would be 32 attestations per block with ~150 validators each.
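The arithmetic above can be sketched quickly. This is a rough back-of-the-envelope script, not client code; the constant and helper names are illustrative:

```python
# Rough estimates of attestation load per block.
# Assumes 32 ETH per validator deposit (illustrative constants, not from a client).
ETH_DEPOSITED = 10_000_000
DEPOSIT_SIZE = 32
CYCLE_LENGTH = 64   # slots per cycle
SHARD_COUNT = 1000

validators = ETH_DEPOSITED // DEPOSIT_SIZE              # 312_500 (~300k)
validators_per_slot = validators // CYCLE_LENGTH        # ~4_882 (~5000)
shards_per_slot = round(SHARD_COUNT / CYCLE_LENGTH)     # ~16

def attestation_load(vote_splits):
    """Attestations per block, and signers per attestation, when every
    committee splits its votes across `vote_splits` crosslink candidates."""
    attestations = shards_per_slot * vote_splits
    validators_per_attestation = validators_per_slot // attestations
    return attestations, validators_per_attestation

for splits in (1, 2, 3, 4, 5):
    print(splits, attestation_load(splits))
# splits=1 -> (16, 305); splits=2 -> (32, 152), matching the ~300 and ~150 figures above
```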
EDIT
My estimates of the number and size of committees were a bit off in practice. When using `BenchmarkParams { total_validators: 312500, cycle_length: 64, shard_count: 1024, shards_per_slot: 16, validators_per_shard: 305, min_committee_size: 128 }`, each slot has approximately 20 committees of size 244 (rather than 16 of ~300). This shouldn't drastically change the output, but it is a better target because it reflects the actual shuffling algorithm.
(cc: @paulhauner)
EDIT2
My original assumption was correct and the spec was incorrect! Go with the original estimates.