Timestamping and indexing in the WakuStore

Context:

This conversation grew out of "Ensure consistent resolution of receiver timestamp" by jm-clius (Pull Request #815, waku-org/nwaku) and relates to indexing and timestamping in the store.

Conversation thus far:

@staheri14

I would still propose that the index itself be a simple, time-sortable hash over receiver timestamp and message content. If prepended by the epoch received timestamp, …

  1. If we aim to do so, then it should be the hash of the sender timestamp with the content, not the receiver timestamp, as the receiver timestamp is not a correct indication of the time the message was originally generated (especially if in the future we aim to allow cross-store node queries).

  2. I am still not clear on how this can improve performance. Is it about finding the Cursor in O(1)? Or about improving message retrieval time, e.g., for time-range queries?
    I think what you are suggesting is entirely an internal implementation detail and does not need to touch the spec or the structure of the Index and Cursor.
    Do you want a hash map whose keys are Index objects? If so, a custom hash function is sufficient for that implementation. Can you please elaborate on your proposed solution? @jm-clius

@jm-clius
Thanks for the questions, @staheri14! Good to think things through. :smile: My answer below may have gone into too much detail, so LMK if anything’s confusing or if you have other thoughts/questions.

For consistency’s sake, the senderTime should also be in millisecond resolution; otherwise we will run into the same issue as with the receiverTime.

It may be a good idea to use the same resolution where senderTime is generated (outside the store), but I don’t think the store should ever modify the senderTime, which should remain consistent across node hops. Since the store simply uses the senderTime in whatever resolution it receives, we don’t run into the same issue. It’s only receiverTime, which is generated in the store, that may mismatch between what we write on the wire and what we store in memory, because of how unpredictable epochTime() is. Of course, once we move to int64 milliseconds, it will be applied to both senderTime and receiverTime and there will be no ambiguity.
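
A minimal sketch of what such a fix could look like in nim (using only std/times; the helper name is illustrative, not nwaku's actual API): deriving the receiver timestamp as an int64 millisecond count sidesteps the float formatting ambiguity of epochTime().

```nim
import std/times

# Illustrative helper (not nwaku's actual API): a millisecond-resolution
# receiver timestamp as int64. Unlike the float returned by epochTime(),
# whose string form can vary, this conversion is exact and unambiguous.
proc receiverTimeMillis(): int64 =
  let now = getTime()   # Time value with nanosecond precision
  now.toUnix * 1_000 + int64(now.nanosecond div 1_000_000)

echo receiverTimeMillis()
```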

Spec also needs an update

The spec currently states that receiverTime is “the UNIX time at which the waku message is received”. In my view, this still holds exactly. We have only identified that in the nim implementation there could be an internal mismatch between the timestamp representation in memory (because we use epochTime(), which is inconsistent) and the one we “advertise” on the wire. Using millisecond resolution is simply a way to generate the timestamp (i.e. to follow the spec) in a way that does not cause inconsistencies in nim-waku. IMO this is then an implementation-only fix, or wdyt?

If we aim to do so, then it should be the hash of the sender timestamp with the content, not the receiver timestamp, as the receiver timestamp is not a correct indication of the time the message was originally generated.

It should be guaranteed unique inside the store node, so we can use it as the Primary Key field in the SQL database and as the index/hash in our internal hash table (index:message). In my view, the only way to guarantee uniqueness is to use receiverTime, as senderTime is outside our control and is most often not used in any case. I don’t think the index needs to have any significance outside the store node, so there is no need for it to be globally “correct”.
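
To make the idea concrete, here is a hedged sketch (type and field names are hypothetical) of a key that is prefixed by the receiver timestamp, so it stays time-sortable, while a content hash suffix provides uniqueness within the node:

```nim
import std/hashes

# Hypothetical index key: tuples compare lexicographically in nim, so keys
# sort by receiverTime first; the content hash breaks ties when two
# messages arrive in the same millisecond.
type IndexKey = tuple[receiverTime: int64, digest: Hash]

proc indexKey(receiverTime: int64, payload: string): IndexKey =
  (receiverTime: receiverTime, digest: hash(payload))

# Two different payloads received in the same millisecond get distinct keys:
assert indexKey(1_650_000_000_123, "msg a") != indexKey(1_650_000_000_123, "msg b")
```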

(especially if in the future we aim to allow cross-store node queries)

The Index is only a way for us to efficiently structure the stored messages internally; it also serves as the cursor to the next page inside this store node. I don’t think it will affect cross-store node queries (it’s only used as a cursor for “next page” queries, which are unique to each store node in the current implementation too, or am I missing something?).

I am still not clear on how this can improve performance.

Currently, each time we receive a query (sketched in code after the list below):

  1. We make a copy of the entire message history
  2. We filter the entire message history from this copy to extract the messages that match a filter into a new sequence.
  3. We sort this filtered sequence into yet another copy (at this point we may even have to deduplicate, which will again have to be done across everything and will create another copy of the entire history, but I think if we only apply the filter in (2) once, we should be fine without deduplicating :slight_smile: ).
  4. We iterate through the resulting sequence from (3) to find the message that matches the cursor in our query.
  5. After all these copies, we only extract a max of 100 messages to fill the page response.
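
For illustration, a simplified nim model of this flow (the Msg type and proc are hypothetical stand-ins, not the nwaku code):

```nim
import std/[algorithm, sequtils]

type Msg = object   # hypothetical, simplified message record
  receiverTime: int64
  contentTopic: string
  payload: string

proc currentQuery(history: seq[Msg]; topic: string; cursor: int64;
                  pageSize = 100): seq[Msg] =
  # (1)+(2): copy and filter the entire history
  var filtered = history.filterIt(it.contentTopic == topic)
  # (3): sort the filtered copy
  filtered.sort(proc (a, b: Msg): int = cmp(a.receiverTime, b.receiverTime))
  # (4): linear scan to find the cursor position
  var start = filtered.len
  for i, m in filtered:
    if m.receiverTime > cursor:
      start = i
      break
  # (5): after all that work, return at most one page
  result = filtered[start ..< min(start + pageSize, filtered.len)]
```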

If we have a hash table that’s naturally sorted by an index key (for example: stew/sorted_set.nim in status-im/nim-stew), then (a sketch follows this list):

  1. No need to make a copy of the history when we receive a query, as the in-memory store becomes read-only
  2. Finding the cursor becomes a hash table lookup, which scales well.
  3. No need to deduplicate (unique hashes take care of that) or sort (messages are naturally sorted when they’re inserted).
  4. No need to filter entire history: start at wherever the cursor is pointing and read in the direction of paging (backwards/forwards) until you have filled out a single page of history that matches the filter criteria. Then stop.
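
A minimal sketch of the proposed read path, assuming an in-memory store kept sorted by index key on insertion (I use a plain sorted seq and a std/algorithm binary search here rather than reproducing the nim-stew sorted_set API; names are illustrative):

```nim
import std/algorithm

type Msg = object   # same hypothetical message record as in the sketch above
  receiverTime: int64
  contentTopic: string
  payload: string

# Assumption: `store` is kept sorted by receiverTime at insertion time,
# so queries need no copy, no sort, and no deduplication.
proc pageAfter(store: seq[Msg]; cursor: int64; topic: string;
               pageSize = 100): seq[Msg] =
  # Find the first message strictly after the cursor with a binary search.
  var i = store.upperBound(cursor,
    proc (m: Msg; c: int64): int = cmp(m.receiverTime, c))
  # Walk forward, applying the content-topic filter, until the page is
  # full; only the slice of history actually needed is ever touched.
  while i < store.len and result.len < pageSize:
    if store[i].contentTopic == topic:
      result.add store[i]
    inc i
```

Note that the content-topic filter is still applied during the walk; sorting by time does not remove the need to match the filter criteria, it only bounds how much history is scanned.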

I think what you are suggesting is entirely an internal implementation detail and does not need to touch the spec

Agreed! Changing the timestamps to int64 will be a spec change, but this will probably be done beforehand in any case. It may make sense to provide a simpler cursor to clients though - a simple hash value that clients can use to request the next page of history without having to worry about how that cursor is composed. It would also save us from recomputing the hash (needed for lookup) when receiving a query.

Do you want a hash map whose keys are Index objects? If so, a custom hash function is sufficient for that implementation.

Yeah, exactly. The idea is centered around this custom hash function so the Index can be used as an efficient, time-sorted key in a hash table.
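
For instance, a hedged sketch of such a custom hash function in nim (the Index fields here are illustrative, not the spec definition):

```nim
import std/[hashes, tables]

type Index = object           # illustrative fields only
  receiverTime: int64
  digest: array[32, byte]     # e.g. a hash of the message content

# Custom hash so Index can be used directly as a hash table key.
proc hash(i: Index): Hash =
  var h: Hash = 0
  h = h !& hash(i.receiverTime)
  h = h !& hash(i.digest)
  !$h

var store: Table[Index, string]   # index -> message, O(1) average lookup
```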

@staheri14

It may be a good idea to use the same resolution where senderTime is generated (outside the store), but I don’t think the store should ever modify the senderTime, which should remain consistent across node hops. Since the store simply uses the senderTime in whatever resolution it receives, we don’t run into the same issue. It’s only receiverTime, which is generated in the store, that may mismatch between what we write on the wire and what we store in memory, because of how unpredictable epochTime() is. Of course, once we move to int64 milliseconds, it will be applied to both senderTime and receiverTime and there will be no ambiguity.

@jm-clius
My general point is that all waku messages should be indexed identically across all store nodes; this means a Cursor obtained from one node can be used to query another node (or at least that discrepancies are not due to some unnoticed type conversions in the json rpc API or elsewhere).
As for the senderTimestamp, assume the case that a node A uploads a message with timestamp t.wxyz to a store node B via the json rpc API, and in the string conversions happening under the hood, B receives t.wxy. If A later wants to query B for the messages published after its own message, A creates a Cursor using its own message data, i.e., sets the sender timestamp to t.wxyz (and calculates the hash based on the spec description); its cursor is then rendered invalid on the B side because t.wxyz != t.wxy. (Note that in this scenario I am disregarding the receiver time, which is by default different at distinct store nodes.)
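
This kind of mismatch is easy to reproduce; a small nim illustration of the lossy round trip (the three-decimal serialization is just an assumed example of what a json rpc layer might do):

```nim
import std/strutils

let senderTime = 1650000000.1234                       # t.wxyz as node A knows it
let onTheWire = senderTime.formatFloat(ffDecimal, 3)   # lossy serialization: "t.wxy"
let storedByB = parseFloat(onTheWire)                  # what node B ends up storing

# A's cursor hashes over t.wxyz, B's index over t.wxy: they can never match.
assert senderTime != storedByB
```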

I do not see any harm in specifying the desired time resolution for sender and receiver time, and I think it is prudent to do so.

The spec currently states that receiverTime is “the UNIX time at which the waku message is received”. In my view, this still holds exactly. We have only identified that in the nim implementation there could be an internal mismatch between the timestamp representation in memory (because we use epochTime(), which is inconsistent) and the one we “advertise” on the wire. Using millisecond resolution is simply a way to generate the timestamp (i.e. to follow the spec) in a way that does not cause inconsistencies in nim-waku. IMO this is then an implementation-only fix, or wdyt?

@jm-clius
Again, for the cursor reusability (explained in the previous comment), I’d suggest indicating the time resolution in the spec.

It should be guaranteed unique inside the store node, so we can use it as the Primary Key field in the SQL database and as the index/hash in our internal hash table (index:message). In my view, the only way to guarantee uniqueness is to use receiverTime, as senderTime is outside our control and is most often not used in any case. I don’t think the index needs to have any significance outside the store node, so there is no need for it to be globally “correct”.

@jm-clius
From my PoV, store nodes, while being independent entities, serve the same set of waku messages in a fully decentralized fashion. Later, we want them to synchronize and reach an identical store state. Using the waku message’s receiver time disallows this.
Secondly, I do not see why sender time plus hash would not provide uniqueness.
If two messages have the same payload and are published at the same time, they are w.h.p. duplicates. We can also bring the content topic into this equation for extra caution.

senderTime is … most often not used in any case

Why not? All time-range queries target the sender time.

If we have a hash table that’s naturally sorted by an index key (for example: stew/sorted_set.nim in status-im/nim-stew)
No need to make a copy of the history when we receive a query, as the in-memory store becomes read-only
Finding the cursor becomes a hash table lookup, which scales well.
No need to deduplicate (unique hashes take care of that) or sort (messages are naturally sorted when they’re inserted).
No need to filter entire history: start at wherever the cursor is pointing and read in the direction of paging (backwards/forwards) until you have filled out a single page of history that matches the filter criteria. Then stop

I think the above scenario disregards queries based on the content topic. Even if messages are sorted by time, they should still be filtered by the queried contentTopic.

a simple hash value that clients can use to request the next page of history without having to worry about how that cursor is composed. It would also save us from recomputing the hash (needed for lookup) when receiving a query.

@jm-clius

I do not see how knowing how a cursor is composed would worry clients.

Also, recomputing the hash is not related to the Cursor/Index protobuf definition. Right now the hash of the payload is not recomputed per query; it is computed once at message arrival and saved for future queries. The same hash is used to populate the Cursor of history responses.
We would do a similar thing if we want to update the inputs to the hash, i.e., use the digest field of the Index to represent the hash of payload + content topic + sender time instead.
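
A hedged sketch of that updated digest computation (assuming nimcrypto’s sha256, which nwaku already depends on; the field encoding here is simplified):

```nim
import nimcrypto

# Illustrative: compute the Index digest once, at message arrival, over
# payload + content topic + sender time, and reuse it for every query.
proc computeDigest(payload, contentTopic: string; senderTime: int64): MDigest[256] =
  var ctx: sha256
  ctx.init()
  ctx.update(payload)
  ctx.update(contentTopic)
  ctx.update($senderTime)   # simplified encoding; a fixed-width one is safer
  ctx.finish()
```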