Status MVP - Update on scaling Status Communities

Scaling a single Status Community to 10K nodes

Several teams set out in January 2023 to achieve the ambitious target of providing a Waku network that can scale to ~10K active users per Status Community by the end of May 2023
and provide the minimum set of network services needed to make such a community viable.
Applying the same strategies to multiple Status communities and shards
will allow us to scale to much larger numbers.
The original requirements and task breakdown are being tracked in an overarching issue and in weekly (sometimes twice-weekly) update meetings between the teams involved.
However, to ensure everyone is synced on the bigger picture, it may make sense to take a step back and summarize the conclusions we have drawn so far from investigations
and the tasks that still need to be completed.

Please note that this is not meant to be a complete picture,
but a sync on achieving the scaling numbers per shard as envisioned in the MVP.
This overview draws from the Status Simple Scaling RFC, which should be read for a fuller understanding of the integrated solution for Status scaling.

Theoretical analysis

We started by doing a theoretical analysis (mathematical modelling) of gossipsub scaling under certain conditions.
GossipSub is the underlying libp2p protocol on which the backbone of the Waku network, Waku Relay, is built.
The message rates and sizes chosen for the model come from an analysis of very large Discord servers
and a telemetric analysis of Status traffic.

Conclusion

The Waku Relay network can scale to 10K nodes per shard.
If we assign a shard per Status Community,
each community can scale to 10K active relay-only users
requiring roughly 4 Mbps (without 1:1 chat)
to 6.2 Mbps (with 1:1 chat) bandwidth
for a community with ~5 messages per node per hour
and an average message size of 2.05 KB.
Realistically, most community nodes aren’t active at any given time,
so this approach could scale to ~100K nodes per community if roughly 1/10th of users are active.
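
As a rough sanity check on these figures (not the model itself, just back-of-the-envelope arithmetic that assumes a gossipsub mesh degree of D = 6 and counts duplicate copies in both directions, while ignoring IHAVE/IWANT gossip and framing overhead):

    // Back-of-envelope check of the per-node relay bandwidth; illustrative only,
    // the theoretical analysis referenced above uses a more detailed model.
    package main

    import "fmt"

    func main() {
        const (
            nodes        = 10_000
            msgsPerHour  = 5.0    // messages per node per hour
            msgSizeBytes = 2050.0 // ~2.05 KB average message size
            meshDegree   = 6.0    // assumed gossipsub mesh degree D
        )

        msgsPerSec := nodes * msgsPerHour / 3600.0  // network-wide message rate
        bitsPerSec := msgsPerSec * msgSizeBytes * 8 // receiving one copy of every message
        // A relay node may receive and forward a message on several mesh links,
        // so count roughly D copies in each direction.
        perNodeMbps := bitsPerSec * 2 * meshDegree / 1e6

        fmt.Printf("~%.1f msg/s network-wide, ~%.1f Mbps per relay node\n", msgsPerSec, perNodeMbps)
    }

This lands at roughly 2.7 Mbps, the same order of magnitude as the modelled ~4 Mbps; the remaining gap would come from the control-message and duplicate overhead that the full analysis accounts for.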

Practical simulation

The Distributed Systems Testing (DST) team has been working with Kurtosis to find a way to simulate a large Waku Relay network
to verify the conclusions from the theoretical analysis above.
An initial analysis by the team matched the theoretical mathematical model quite well.
The team is currently busy with larger-scale and more varied simulations.
The Waku team has also been doing various simulations and created a simulator tool.
Outputs have similarly matched the conclusions of the theoretical analysis at the scales tested so far.

Conclusion

It’s proven difficult to achieve the full scale of 10K nodes in a simulated environment.
However, initial results have been encouraging in matching theoretical expectations quite well
and have given us more confidence in our concluded conditions under which a single Waku Relay shard will scale to 10K users.
Achieving a full-scale simulation of 10K nodes may not be possible within the short to medium term,
but smaller scale simulations still provide very useful results.

What still needs to be done?

Owner: Daimakaimura

The DST team has various plans to scale up its simulations, including more Waku clients, protocols, etc.
The simulations focus on nwaku for now.
The next simulation, as set out in nwaku simulation requirements · Issue #108 · vacp2p/wakurtosis · GitHub, includes limiting node connectivity to increase gossip activity, as well as discv5 discovery.

Status Telemetry Analysis

Having drawn some theoretical conclusions about the conditions under which a Waku Relay network will scale
and with some practical experimentation to back these up,
we also did a telemetric analysis of current Status Community traffic
to determine if Status Community usage of Waku is scalable.

Conclusion

Certain message types:

  • MEMBERSHIP_UPDATE_MESSAGE
  • COMMUNITY_DESCRIPTION
  • BACKUP

and to a lesser extent SYNC_PROFILE_PICTURE and COMMUNITY_REQUEST_TO_JOIN_RESPONSE
contribute too much to bandwidth and will not scale.
The COMMUNITY_DESCRIPTION message alone will exceed the maximum Waku message size 25-fold
for a community of 10K users.

These messages must be split off to a separate shard
and published to store nodes via lightpush rather than relayed to everyone in the network.

Most issues related to Waku usage in Status are covered in Scaling Status Communities : Potential Problems · Issue #177 · vacp2p/research · GitHub.
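
To illustrate the intended split, here is a minimal sketch of the routing decision (the type names come from the telemetry analysis; the routing function and CHAT_MESSAGE placeholder are illustrative, not the actual status-go implementation):

    // Illustrative only: which Status message types would be diverted off the
    // community's relay shard and lightpushed to store nodes instead.
    package main

    import "fmt"

    // Message types that the telemetry analysis flagged as contributing too much bandwidth.
    var controlMessages = map[string]bool{
        "MEMBERSHIP_UPDATE_MESSAGE":          true,
        "COMMUNITY_DESCRIPTION":              true,
        "BACKUP":                             true,
        "SYNC_PROFILE_PICTURE":               true,
        "COMMUNITY_REQUEST_TO_JOIN_RESPONSE": true,
    }

    // routeFor returns how a message of the given type would be published.
    func routeFor(msgType string) string {
        if controlMessages[msgType] {
            return "control shard via lightpush (stored, not relayed to every peer)"
        }
        return "community shard via relay"
    }

    func main() {
        fmt.Println(routeFor("COMMUNITY_DESCRIPTION")) // control shard via lightpush ...
        fmt.Println(routeFor("CHAT_MESSAGE"))          // community shard via relay
    }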

What still needs to be done?

  • status-go: implement a different strategy for the messages listed above. Owner: cammellos
  • fleets: ensure fleet nodes are subscribed to all message shards. Owner: jakubgs/haelius

Static sharding

Based on the analyses above,
we published an RFC bringing together an integrated strategy for Status scaling in the MVP,
built on static sharding.
This requires manually selecting and configuring static shard(s) per Community.
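
For reference, static shards map to pubsub topics of the form /waku/2/rs/<cluster-id>/<shard-index>, as defined in the relay-sharding RFC. A minimal sketch of the mapping (the cluster and shard numbers are placeholders, not the final Status allocation):

    // Sketch of pinning a community to a static shard pubsub topic.
    package main

    import "fmt"

    // shardTopic builds the pubsub topic for a static shard, following the
    // /waku/2/rs/<cluster-id>/<shard-index> convention from the relay-sharding RFC.
    func shardTopic(clusterID, shardIndex uint16) string {
        return fmt.Sprintf("/waku/2/rs/%d/%d", clusterID, shardIndex)
    }

    func main() {
        // e.g. a community manually assigned to shard 42 in a hypothetical cluster 16
        fmt.Println(shardTopic(16, 42)) // /waku/2/rs/16/42
    }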

What still needs to be done?

  • nwaku: allow configuring nwaku nodes with one or more static shards. Owner: LNSD (haelius)
  • status-go: configure static shard(s) per community. Owner: rramos
  • fleets: manually configure static shard(s) for Status Community. Owner: jakubgs/haelius
  • all: dogfood sharded communities

Although not part of the MVP, the work needed to automate configuration is tracked in: Status Communities: Orchestrate static shard allocation and infra deployment · Issue #3528 · status-im/status-go · GitHub

DoS protection

The same RFC describes a basic DoS mitigation strategy for each community shard,
based on a community signature with validation on all relay nodes.
This requires distributing a private key among community members
and manually configuring infrastructure nodes with the corresponding public key
to validate that each message being relayed truly originated from the community.
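
A minimal sketch of what this validation could look like on an infrastructure relay node (ed25519 and the payload layout are used purely for illustration; the actual key scheme and message envelope are defined by the RFC and status-go):

    // Illustrative relay-side validator: accept a message only if it carries a
    // valid signature from the community key.
    package main

    import (
        "crypto/ed25519"
        "fmt"
    )

    // validateCommunityMessage is the check a relay node would run before
    // propagating a message on a DoS-protected community shard.
    func validateCommunityMessage(communityPub ed25519.PublicKey, payload, sig []byte) bool {
        return ed25519.Verify(communityPub, payload, sig)
    }

    func main() {
        // The community distributes the private key to its members; infrastructure
        // nodes are configured with the corresponding public key only.
        pub, priv, _ := ed25519.GenerateKey(nil)

        msg := []byte("community message payload")
        sig := ed25519.Sign(priv, msg)

        fmt.Println(validateCommunityMessage(pub, msg, sig))                       // true
        fmt.Println(validateCommunityMessage(pub, []byte("spoofed payload"), sig)) // false
    }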

What still needs to be done?

  • status-go: distribute private key and add signature to messages. Owner: rramos (?)
  • status-go: provide/publish public key for validation in infrastructure nodes. Owner: rramos (?)
  • fleets: manually configure public keys for the DoS protected static shard. Owner: jakubgs/haelius
  • all: dogfood DoS protected static shards

Scalable Store

To provide Store services to 10K users, we need to move from a naive SQLite archive implementation
to a PostgreSQL backend.
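
The key operational point is that every store node in a fleet would point at the same database rather than keeping its own archive. A tiny sketch of that idea (the connection string is hypothetical, and the real integration lives in nwaku's archive driver, not in Go):

    // Illustrative only: all store nodes share one PostgreSQL-backed archive.
    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/lib/pq" // PostgreSQL driver
    )

    func main() {
        // Hypothetical connection string; each store node in the fleet would be
        // configured with the same one so they share a single message archive.
        dsn := "postgres://waku:secret@db.example.org:5432/wakustore?sslmode=require"

        db, err := sql.Open("postgres", dsn)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        fmt.Println("store nodes share this archive:", dsn)
    }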

What still needs to be done?

  • nwaku: finish the async driver and library wrapper for PostgreSQL integration. Owner: ivan
  • nwaku: complete PostgreSQL integration. Owner: cammellos
  • fleets: configure a single PostgreSQL instance for multiple store nodes. Owner: jakubgs
  • all: dogfood PostgreSQL-based store

Discovery

Discovery is also described in the Simple Scaling RFC.
Discv5/Peer-Exchange are updated to consider the static shards subscribed to by nodes in the network.
Libp2p rendezvous discovery is proposed as an additional discovery method
to discover circuit-relay peers during the hole-punching procedure
for nodes behind a restrictive NAT.
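
A rough sketch of the intended flow for a node behind a restrictive NAT (the RendezvousClient interface and namespace below are hypothetical stand-ins, not the go-waku or go-libp2p rendezvous API):

    // Hypothetical sketch of rendezvous-assisted discovery of circuit-relay peers.
    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // RendezvousClient stands in for whatever rendezvous API is integrated.
    type RendezvousClient interface {
        Register(ctx context.Context, namespace string, ttl time.Duration) error     // advertise our addresses
        Discover(ctx context.Context, namespace string, limit int) ([]string, error) // find advertised peers
    }

    const circuitRelayNS = "waku-circuit-relay" // hypothetical namespace

    // A NATed node advertises its circuit-relay address so that other peers can
    // reach it through the relay and start the hole-punching procedure.
    func advertise(ctx context.Context, rdv RendezvousClient) error {
        return rdv.Register(ctx, circuitRelayNS, 2*time.Hour)
    }

    // A dialing peer looks up relayed addresses of otherwise unreachable nodes.
    func findRelayedPeers(ctx context.Context, rdv RendezvousClient) ([]string, error) {
        return rdv.Discover(ctx, circuitRelayNS, 20)
    }

    func main() {
        fmt.Println("namespace used for circuit-relay discovery:", circuitRelayNS)
    }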

What still needs to be done?

  • nwaku: integrate libp2p rendezvous discovery to act as rendezvous point by default. Owner: vpavlin
  • go-waku: use libp2p rendezvous as part of hole-punching procedure. Owner: rramos
  • go-waku (?): enable libp2p rendezvous point by default for relayers. Owner: rramos
  • all: dogfood

Light protocols

Although the Status MVP design assumes a network of 10K relayers,
acting as a relayer may not be feasible for some nodes due to resource (bandwidth)
or connectivity restrictions.
As part of the MVP we’re working on a new version of the Filter protocol and on productionizing Lightpush and Peer-Exchange.

What still needs to be done?

  • all: dogfood the new filter implementation at the client level.
  • status-go: integrate filter and lightpush for resource-restricted peers. Owner: vitaliy
  • status-go: integrate peer-exchange for resource-restricted node discovery. Owner: vitaliy (?)
  • all: dogfood integrated resource-restricted protocols in Status Community

Miscellaneous

Some work items are left out of the summary above as they don’t strictly belong to the network requirements for a scalable Status Community.

What still needs to be done?

  • nwaku: finish release automation. Owner: vpavlin
  • nwaku/go-waku/DST: continuous integration testing for nwaku and go-waku. Owner: Waku Dev
  • fleets: ownership requirements clearly specified. Owner: haelius
  • fleets: train and transfer ownership from Waku to Status teams. Owner: haelius/jakubgs

What haven’t we mentioned?

I haven’t included a summary of all the work that’s been done so far. Notably, I’ve left out the huge efforts around proper peer management strategies, Vac’s help in specifying the Status Chat protocols, deterministic message hashing, etc.

Conclusions

  • A Status Community can scale to 10K users if it uses static sharding.
  • For now, this requires manually mapping communities to static shards.
  • These shards must also be manually configured on infrastructure nodes.
  • Some Status Community messages do not scale and must be moved to separate control message shards where resource-restricted protocols are used.
  • DoS mitigation requires each Community to distribute a private key and sign messages with it.
  • For now, this requires publishing and manually configuring the public key on infrastructure nodes that must partake in the mitigation.
  • Store services can be provided with a single PostgreSQL instance (configured per community).
  • Light protocols should be available for resource-restricted nodes.
  • Nodes behind a restrictive NAT should use circuit-relay to perform a hole-punching procedure.
  • Such nodes can make their circuit-relay addresses discoverable via libp2p rendezvous discovery.

Some feedback after presenting the above to Status teams:

  1. The plan to provide a pluggable validator interface could open the door to making use of third-party infra providers. Waku can provide basic validation for DoS protection, but providers/platforms could also implement their own if they have different requirements or better solutions than what we can provide.
  2. Differentiation in gossipsub parameters between fleet nodes and Community clients could shift some of the bandwidth burden to the fleet (e.g. a higher mesh degree D for fleet nodes) and free up client resources; see the sketch below.
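
As an illustration of point 2, this is roughly how a fleet node could be given a denser mesh with go-libp2p-pubsub (the exact values are placeholders, not tuned recommendations; nwaku would do the equivalent via nim-libp2p):

    // Illustration of feedback point 2: give fleet (infrastructure) nodes a denser
    // gossipsub mesh than client nodes so they carry more of the relay burden.
    package main

    import (
        "context"
        "log"

        "github.com/libp2p/go-libp2p"
        pubsub "github.com/libp2p/go-libp2p-pubsub"
    )

    func main() {
        ctx := context.Background()

        h, err := libp2p.New()
        if err != nil {
            log.Fatal(err)
        }

        // Start from the library defaults and raise the mesh degree for a fleet node.
        params := pubsub.DefaultGossipSubParams()
        params.D = 8      // target mesh degree (default 6)
        params.Dlo = 6    // lower bound before grafting more peers
        params.Dhi = 12   // upper bound before pruning
        params.Dlazy = 12 // more gossip (IHAVE) targets as well

        ps, err := pubsub.NewGossipSub(ctx, h, pubsub.WithGossipSubParams(params))
        if err != nil {
            log.Fatal(err)
        }
        _ = ps // a client node would instead keep the defaults, or use light protocols
    }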