Protocol Buffers vs JSON-RPC

In this discussion, I make some points on why we should use protobuf over JSON-RPC for the various APIs that our nodes will expose. This shift has already started: protocols such as MVDS and RemoteLog use protobuf, whereas Waku and mailservers still use JSON-RPC, a relic from their Ethereum past.

There are several aspects we should look at, and it is important that we do so in order to choose a suitable candidate as we work on our protocols. It is equally important that we pick one we can stick with and continue to use for the rest of our work.

This discussion was started due to the updating of our mailserver API; there is an open pull request on the subject, see vacp2p/specs#116 for details.


Both JSON-RPC and Protocol Buffers are transport agnostic, meaning they can be defined and used over various transports, e.g. WebSockets, HTTP, etc. In this respect the two are equal.


JSON-RPC is simply JSON, whereas with Protocol Buffers the data can be represented in a variety of forms, including binary and JSON-based formats.
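As a minimal sketch of what "simply JSON" means here (the method name and params are made up for illustration), a JSON-RPC 2.0 exchange is just text on the wire:

```python
import json

# Hypothetical mailserver-style call; method name and params are illustrative.
request = {"jsonrpc": "2.0", "id": 1, "method": "getMessages",
           "params": {"topic": "waku"}}
response = {"jsonrpc": "2.0", "id": 1, "result": []}

wire = json.dumps(request)          # what actually travels over the transport
assert json.loads(wire) == request  # round-trips, but only ever as text
```

There is no binary representation in the JSON-RPC spec; protobuf, by contrast, lets one message definition serialize to compact binary or to JSON.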


Protocol Buffers has a more formal schema: it is easy to define what the input is for a given query and what the output is. See the example below.

message Foo {}
message Bar {}

service Baz {
  rpc Bat (Foo) returns (Bar);
}

This, however, does not exist for JSON-RPC: there is no formalization of how to define schemas, methods, etc. This has been pointed out by MHS in regards to Ethereum.
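To illustrate the lack of a formal schema (again with a made-up method name), nothing in JSON-RPC itself constrains parameter types; both of the requests below parse equally well, and any validation is left to hand-written server code:

```python
import json

# "limit" should arguably be an integer, but JSON-RPC has no schema to say so.
good = '{"jsonrpc": "2.0", "id": 1, "method": "getMessages", "params": {"limit": 10}}'
bad = '{"jsonrpc": "2.0", "id": 2, "method": "getMessages", "params": {"limit": "ten"}}'

for raw in (good, bad):
    json.loads(raw)  # both accepted; a protobuf schema would reject the second at decode time
```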


Another issue pointed out in the thread regarding the use of JSON-RPC within ETH2.0 is that JSON-RPC doesn’t really have any tooling. There is no standard on how to define requests and responses, nor is there any formal language. Protocol Buffers, however, gives us both. The fact that we have an actual specification and proper tools for Protocol Buffers is, to me, the most important point.


One of the arguments often mentioned in the blockchain space (especially Ethereum) is the fact that the Protocol Buffers IP is owned by Google, which could potentially be a bad thing (?). I think this is a non-issue, however.


One thing Peter mentioned is that JSON-RPC is stateful, which may be problematic. I am not sure if this is just Ethereum-related or more general, but it means that horizontal scaling is completely out of the picture. JSON-RPC is made for two peers sending messages between each other, which it is good at. It is not really made for client-server relationships, however, which a mailserver is.

  • protocol buffer IP is owned by google. Which could potentially be a bad thing (?), I think this is a non issue however.

    Why do you think this is a non-issue?

  • Is the associated protobuf schema limiting in any way that JSON-RPC isn’t? Meaning, are we potentially confined by the strict schema in a way that a less stringent method wouldn’t confine us?

  • We should consider the cache-ability of the method, as optimizing infrastructure will become necessary as scaling becomes a concern

  • Lastly, things like binary help with efficiency, particularly as we optimize for small footprints and resource-restricted devices.
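On the efficiency point, a rough sketch of the size difference: protobuf packs integers as base-128 varints rather than decimal text. The hand-rolled encoder below only illustrates the wire format; it is not the real library:

```python
import json

def encode_varint(n: int) -> bytes:
    """Protobuf-style varint: 7 data bits per byte, high bit marks continuation."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

values = [3, 270, 86942]
as_json = json.dumps(values).encode()                    # 15 bytes of text
as_varints = b"".join(encode_varint(v) for v in values)  # 6 bytes packed

assert len(as_varints) < len(as_json)
```

The gap only grows with message size, which matters for the resource-restricted devices mentioned above.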

I think the likelihood that Google would do something specifically to target Vac, or other distributed systems that are using Protocol Buffers, is low, and therefore this is a non-issue.

I mean, essentially they are the same: JSON has a schema of what data types can be used, and if anything Protocol Buffers supports more native types.

JSON-RPC is inherently uncacheable; it is very client specific. See this comment from MHS.
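One way to see the cacheability problem (the method name here is illustrative): two clients asking the same question produce different request bodies, because each embeds its own `id`, so a cache keyed on the request can never get a hit:

```python
import json

def make_request(request_id, topic):
    # Each client (or each call) uses its own id, as JSON-RPC requires.
    return json.dumps({"jsonrpc": "2.0", "id": request_id,
                       "method": "getMessages",
                       "params": {"topic": topic}}).encode()

req_a = make_request(1, "waku")
req_b = make_request(2, "waku")

# Semantically the same query, but byte-for-byte different requests.
assert req_a != req_b
```

On top of that, JSON-RPC typically travels in HTTP POST bodies, which standard HTTP caches ignore, whereas a REST-style GET with the query in the URL is trivially cacheable.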

Protobuf currently supports JSON out of the box.

We already use protobuf for some JSON-RPC endpoints (this means that we pass JSON and it’s encoded directly in protobuf).
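A sketch of that out-of-the-box JSON support, assuming the official `protobuf` Python package is available (`Struct` is just a convenient bundled message type; any generated message works the same way):

```python
import json

from google.protobuf.struct_pb2 import Struct
from google.protobuf.json_format import MessageToJson

msg = Struct()
msg.update({"topic": "waku", "limit": 10})

binary = msg.SerializeToString()   # compact wire format
as_json = MessageToJson(msg)       # canonical JSON mapping of the same message
round_tripped = json.loads(as_json)
```

So a single message definition can serve both a binary transport and a JSON-facing endpoint.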

In terms of performance, I don’t think it makes much difference at this point; JSON is efficient enough for our use case. Consider as well that JSON-RPC is really only used locally (we don’t expose any RPC method, I believe, and the mailserver is reached through devp2p), so one server per client does not need much optimization or horizontal scaling :wink:

I agree that we should definitely cover the endpoints with protobuf where meaningful, though.


Eth2 will use HTTP/REST via OpenAPI/Swagger to maximise compatibility - with OpenAPI/REST, a lot of infrastructure like caches, load balancers etc can be integrated more easily.

Generally, at this level, protobuf can be a barrier to interoperability (command line tools need decoding, gRPC is tricky, etc). Joining the eth2 train is a feature in some ways.

Agreed; however, we can still use protobuf to define JSON structs, although that is not necessary.