ZK-Based Ticket System for Store Service Incentivization MVP

Abstract

This document proposes an MVP architecture for incentivizing Waku Store service provision
through a ZK-based ticket payment protocol.

The MVP focuses on core ticket mechanics with a single Service Provider operating six Service Nodes. The architecture implements a ticket-based payment system that prevents trivial fraud,
ensures unlinkability between payment and service provision, and supports batch ticket operations.

This proposal follows a recent Service Incentivisation MVP discussion and builds upon earlier work linked there.

Background

Waku uses request–response protocols like Store to extend its Relay backbone. This enables edge nodes (such as smartphones and browsers) to retrieve missed messages.

Currently, Status runs its own Store Service Nodes. The goal of the MVP is to integrate existing Service Nodes operated by Status as the Service Provider into a new ticket-based service protocol providing unlinkability.

The long-term goal is to extend the protocol to a decentralized marketplace of independent Service Providers. Required developments might include reputation systems, progressive trust mechanisms for Service Provider selection, dynamic pricing, and alternative payment mechanisms.

MVP Requirements and Assumptions

Requirements for the MVP:

  • Security: Prevent trivial fraud; no catastrophic loss from a single session.
  • Unlinkability: Break the link between payment and service provision.
  • Bearer instruments: Possession of a ticket grants service access rights.
  • Efficiency: Support for batch operations; no per-request on-chain transactions.
  • Simplicity: Minimal protocol and contract surface for rapid iteration.
  • Upgradeability: Clear path to improvements.

The MVP makes the following simplifying assumptions:

  • Single Service Provider managing six known Service Nodes, therefore:
    • no Provider deposits or slashing;
    • no Provider discovery and selection mechanism.
  • The Service Provider provides state sync with TicketRegistry for its Service Nodes.
  • Sponsor determines the expiry period for ticket commitments it funds.
    • When Sponsor and User are separate entities, User must trust Sponsor to include correct expiry in funding transaction.
  • Unit-of-service (UoS) defined as a single request-response interaction.
  • Fixed pricing per UoS with no negotiation.
  • Single gasless L2 chain deployment.
  • Single ERC-20 token support.

The MVP focuses on proving the core ticket issuance, spending, and redemption mechanics
without reputation systems. It emphasizes simplicity and feasibility, while establishing an upgrade path to a decentralised marketplace.

Future development will involve a multi-Service Provider marketplace with reputation systems
and progressive trust mechanisms for Service Provider selection.

MVP Architecture

In this section, we describe a ZK-based ticket payment system, which is the central part of the MVP version of the Store incentivization protocol.

Terminology

This section establishes the vocabulary of the protocol actors and their interactions.

The terminology follows privacy literature conventions, avoids direct association with financial instruments, and clearly distinguishes human-managed entities from contracts (using Registry suffix).

Objects

Ticket:
A random secret value representing one unit of service (UoS). A ticket is a bearer instrument:
it can be transferred off-chain between Users. A ticket is used to generate ticket commitments (for funding) and nullifiers (for spending).

Ticket commitment:
A cryptographic commitment to a ticket: commitment = hash(ticket_secret, blinding_factor). A ticket commitment may represent one ticket or a Merkle root of multiple tickets. The commitment is recorded on-chain in the TicketRegistry contract.

Nullifier:
A unique provider-specific ticket spending identifier: nullifier = hash(ticket_secret, provider_id). A nullifier is revealed as part of an eligibility proof. A nullifier can be redeemed for cash in the TicketRegistry contract exactly once. Redeemed nullifiers are tracked in the TicketRegistry contract.
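
As an illustration of these two derivations, here is a minimal sketch in Python, using SHA-256 from the standard library as a stand-in for whatever ZK-friendly hash (e.g. Poseidon) the actual circuit would use; all names and the byte encoding are illustrative assumptions, not a specification:

import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    # Stand-in hash; a real circuit would use a ZK-friendly hash such as Poseidon.
    return hashlib.sha256(b"".join(parts)).digest()

# Ticket: a random secret value, plus a blinding factor for the commitment.
ticket_secret = secrets.token_bytes(32)
blinding_factor = secrets.token_bytes(32)

# Ticket commitment, recorded on-chain in TicketRegistry at funding time.
ticket_commitment = h(ticket_secret, blinding_factor)

# Nullifier, revealed only when spending the ticket with a specific provider.
provider_id = b"provider-0001"  # hypothetical identifier
nullifier = h(ticket_secret, provider_id)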

Eligibility proof:
A proof package submitted by a User to a Service Node alongside a request. An eligibility proof consists of a nullifier and a ZK proof demonstrating possession of a funded ticket. An eligibility proof is verified by a Service Node before serving a request.

ZK Proof Statement:

Public inputs: nullifier, ticket_commitment, provider_id, registry_merkle_root, block_number
Private inputs: ticket_secret, blinding_factor, merkle_path

Proof: I possess a ticket (ticket_secret, blinding_factor) such that:
  • ticket_commitment = hash(ticket_secret, blinding_factor);
  • nullifier = hash(ticket_secret, provider_id);
  • ticket_commitment is included in registry_merkle_root via merkle_path;
  • ticket_commitment was unspent as of block_number.

Service request:
A request sent by a User to a Service Node that includes an eligibility proof.

Unit of service (UoS):
A response to a service request.

Actors

User:
An entity that consumes Waku Store services. The User generates ticket secrets, creates ticket commitments for funding, and attaches eligibility proofs to requests. Users may interact with any Service Nodes managed by a single Service Provider (MVP version). In future versions, Users select optimal Service Providers based on progressive trust assessment.

Sponsor:
An entity that funds ticket commitments received from Users by sending a funding transaction to TicketRegistry. A Sponsor determines an expiry period for ticket commitments it funds. Sponsors may or may not be the same entity as the User.

Service Provider:
An entity that manages one or multiple Service Nodes. The Service Provider maintains blockchain connectivity, syncs TicketRegistry state, and distributes state updates to all its Service Nodes. A Service Provider maintains a unified nullifier set across all its Service Nodes and provides real-time nullifier validation for them. A Service Provider handles batch redemption
of collected nullifiers through TicketRegistry.

The following actors are out of scope:
network observers (ISPs); blockchain entry points (RPC Service Providers); and blockchain operating entities (block producers, miners, validators).

Infrastructure

Service Node:
A Waku node running the Store service and accepting requests from Users. Service Nodes are operated by a Service Provider that may manage multiple Service Nodes. Service Nodes delegate blockchain communication to their Service Provider. A Service Node verifies an eligibility proof as follows:

  • verify ZK proofs against locally cached blockchain state;
  • forward nullifiers to their Service Provider for double-spend prevention.

TicketRegistry:
An on-chain contract that tracks funded ticket commitments, maintains spent nullifier sets to prevent double-spending, and pays Service Providers for valid nullifier redemptions. It also allows Sponsor reclamation of unused tickets after expiry.

ProviderRegistry (post-MVP):
An on-chain contract that maintains a registry of Service Providers. Service Providers must lock up a deposit to join the protocol. The registry handles provider deposits and withdrawals, enforces withdrawal delays to prevent “scam-and-exit” behavior, implements slashing for systematic service failures, and presents observable on-chain metrics that enable Users to discover and compare Providers.

Protocol Diagrams

Flow Diagram

flowchart LR
    User[User]
    Sponsor[Sponsor]
    TicketRegistry[TicketRegistry]
    ServiceProvider[Service Provider]
    ServiceNode[Service Node]

    User -->|Funding request| Sponsor
    Sponsor -->|Fund commitments| TicketRegistry

    ServiceProvider <-.->|Sync state| TicketRegistry
    ServiceProvider -.->|Distribute state| ServiceNode

    User -->|Service request| ServiceNode
    ServiceNode <-->|Double-spend nullifier check| ServiceProvider
    ServiceNode -->|Service response| User

    ServiceProvider <-.->|Redeem batch| TicketRegistry

    Sponsor <-.->|Reclaim expired| TicketRegistry

Sequence Diagram

sequenceDiagram
    autonumber
    participant User
    participant Sponsor
    participant ServiceProvider
    participant ServiceNode
    participant TicketRegistry

    Note over User: 1. Ticket Creation
    User->>User: Generate ticket_secret(s)
    User->>User: Compute ticket_commitment(s) (optionally Merkle root for batch funding)

    Note over User,Sponsor: 2. Ticket Funding
    User-->>Sponsor: Send ticket commitment
    Sponsor->>TicketRegistry: fund(commitments, expiry)
    TicketRegistry-->>Sponsor: Record commitments funded with expiry

    Note over ServiceProvider,ServiceNode: 3. State Synchronization
    ServiceProvider->>TicketRegistry: Sync funded commitments
    TicketRegistry-->>ServiceProvider: registry_merkle_root
    ServiceProvider->>ServiceNode: Distribute registry_merkle_root to all Service Nodes

    Note over User,ServiceNode: 4. Service Request
    User->>User: Construct eligibility proof (nullifier, ZK proof, registry_merkle_root)
    User->>ServiceNode: Service request + eligibility proof

    Note over ServiceNode,ServiceProvider: 5. Service Provision
    ServiceNode->>ServiceNode: Verify ZK proof including Merkle inclusion and staleness window
    ServiceNode->>ServiceProvider: Forward nullifier for double-spend check
    ServiceProvider->>ServiceProvider: Check nullifier against unified set
    ServiceProvider-->>ServiceNode: Accept or reject
    alt Valid proof and nullifier accepted
        ServiceNode-->>User: Serve requested data (UoS)
    else Rejected
        ServiceNode-->>User: Reject request
    end

    Note over ServiceProvider,TicketRegistry: 6. Redemption
    ServiceProvider->>ServiceProvider: Collect accepted nullifiers and proofs
    ServiceProvider->>TicketRegistry: redeemBatch(nullifiers, eligibility proofs)
    TicketRegistry->>TicketRegistry: Verify proofs, check unspent, mark spent
    TicketRegistry-->>ServiceProvider: Transfer payment
    Note over Sponsor,TicketRegistry: After expiry, Sponsor may reclaim unused commitments
    Sponsor->>TicketRegistry: reclaimExpired(commitments)

Protocol Flow

1. Ticket Creation

  • User generates random ticket secrets;
  • User computes a ticket commitment from ticket secrets (optionally as a Merkle root for batch funding; see the sketch after this list).
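
A minimal sketch of the optional batch-funding step, assuming a simple binary Merkle tree over the commitments; the pairing and padding conventions here are assumptions, and hashes reuse the SHA-256 stand-in from the earlier sketch:

import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    # Fold commitments pairwise into a single root; duplicate the last leaf on odd levels.
    assert leaves, "need at least one commitment"
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # padding convention is an assumption
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Example: a User batches four placeholder commitments and hands only the root to the Sponsor.
commitments = [h(bytes([i]) * 32, b"blinding") for i in range(4)]
batch_root = merkle_root(commitments)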

2. Ticket Funding

  • User sends a ticket commitment to Sponsor (off-chain);
  • Sponsor determines the expiry period for the ticket commitment;
  • Sponsor submits a funding transaction with ticket commitments and expiry period to TicketRegistry (on-chain);
  • TicketRegistry records ticket commitments as funded with specified expiry (supports batch operations).

3. State Synchronization

  • Service Provider periodically synchronizes with TicketRegistry to obtain the latest Merkle root representing funded ticket commitments;
  • Service Provider distributes the latest Merkle root to all its Service Nodes;
  • Service Nodes verify proofs against locally cached state with configurable staleness tolerance, while delegating nullifier double-spend checks to Service Provider for real-time coordination.

4. Service Request

  • User generates an eligibility proof consisting of:
    • nullifier (derived from ticket secret and Service Provider ID);
    • ZK proof showing knowledge of ticket secret
      and inclusion of ticket commitment in current Merkle root;
    • Reference to Merkle root used.
  • User sends a service request to any Service Node operated by a trusted Service Provider, attaching an eligibility proof.

5. Service Provision

  • Service Node receives request and eligibility proof;
  • Service Node verifies ZK proof, including Merkle inclusion and that referenced Merkle root is within acceptable staleness window;
  • Service Node forwards nullifier to Service Provider for double-spend checking;
  • Service Provider checks nullifier against unified set of previously seen nullifiers;
  • Service Provider responds to Service Node with accept or reject decision;
  • If the ZK proof is valid and Service Provider accepts the nullifier, Service Node serves requested data to User (see the sketch after this list).
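
A sketch of this decision logic under the MVP assumptions: proof verification and the staleness check happen locally against cached roots, while the double-spend check is delegated to the Service Provider. The staleness window value, the data shapes, and verify_zk_proof are placeholders, not part of the specification:

from dataclasses import dataclass, field

STALENESS_WINDOW_BLOCKS = 100  # hypothetical configuration value

@dataclass
class EligibilityProof:
    nullifier: bytes
    zk_proof: bytes
    registry_merkle_root: bytes
    block_number: int

@dataclass
class ServiceProvider:
    # Unified nullifier set shared by all Service Nodes of this provider.
    seen_nullifiers: set = field(default_factory=set)

    def accept_nullifier(self, nullifier: bytes) -> bool:
        if nullifier in self.seen_nullifiers:
            return False  # double-spend attempt within this provider
        self.seen_nullifiers.add(nullifier)
        return True

def verify_zk_proof(proof: EligibilityProof) -> bool:
    return True  # placeholder: a real node would run the circuit verifier here

def handle_request(proof: EligibilityProof, cached_roots: dict, latest_block: int,
                   provider: ServiceProvider) -> bool:
    # 1. The referenced root must be locally cached and within the staleness window.
    root_block = cached_roots.get(proof.registry_merkle_root)
    if root_block is None or latest_block - root_block > STALENESS_WINDOW_BLOCKS:
        return False
    # 2. Verify the ZK proof against locally cached state.
    if not verify_zk_proof(proof):
        return False
    # 3. Delegate the double-spend check to the Service Provider.
    return provider.accept_nullifier(proof.nullifier)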

6. Redemption

  • Service Provider collects all nullifiers that have been accepted and served by its Service Nodes;
  • At intervals, Service Provider submits a batch redemption transaction to TicketRegistry, including collected nullifiers and corresponding eligibility proofs;
  • TicketRegistry verifies eligibility proofs, checks nullifiers are unspent, marks them as spent,
    and transfers payment to Service Provider for valid redemptions;
  • If any ticket commitments remain unused after Sponsor-determined expiry, Sponsor may reclaim them from TicketRegistry (see the registry sketch after this list).
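
An in-memory Python model of the bookkeeping described in steps 2 and 6; the real contract would live on-chain and verify ZK proofs, whereas here verification is a placeholder and all names, signatures, and the fixed price are assumptions:

PRICE_PER_UOS = 1  # fixed price per unit of service, in token units (illustrative)

class TicketRegistryModel:
    # In-memory bookkeeping sketch; no cryptography, no token transfers.

    def __init__(self):
        self.funded = {}               # commitment -> {"expiry": block, "sponsor": address}
        self.spent_nullifiers = set()  # global set preventing double redemption

    def fund(self, sponsor: str, commitments: list, expiry_block: int) -> None:
        for c in commitments:
            self.funded[c] = {"expiry": expiry_block, "sponsor": sponsor}

    def redeem_batch(self, nullifiers: list, proofs: list) -> int:
        # Pay out one UoS per valid, previously unredeemed nullifier.
        payout = 0
        for nullifier, proof in zip(nullifiers, proofs):
            if nullifier in self.spent_nullifiers:
                continue  # already redeemed, skip
            if not self._verify_eligibility_proof(nullifier, proof):
                continue  # invalid proof, skip
            self.spent_nullifiers.add(nullifier)
            payout += PRICE_PER_UOS
        return payout  # the real contract would transfer ERC-20 tokens to the Service Provider

    def reclaim_expired(self, sponsor: str, commitments: list, current_block: int) -> int:
        # Because nullifiers are unlinkable to commitments, the contract cannot tell whether an
        # expired commitment was actually spent; the discussion below revisits this point.
        refunded = 0
        for c in commitments:
            entry = self.funded.get(c)
            if entry is None or entry["sponsor"] != sponsor:
                continue
            if current_block > entry["expiry"]:
                del self.funded[c]
                refunded += PRICE_PER_UOS
        return refunded

    def _verify_eligibility_proof(self, nullifier, proof) -> bool:
        return True  # placeholder for on-chain ZK proof verification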

State Synchronization Analysis

State synchronization is required because eligibility proofs reference the current set of funded ticket commitments, which evolves as tickets are funded and redeemed in TicketRegistry. Without synchronized state, Service Nodes cannot verify ticket validity and Users cannot generate valid proofs.

In the MVP, Service Nodes delegate nullifier management to their Service Provider while independently verifying cryptographic proofs. When Users make requests, they prove their ticket’s inclusion in the current state. Service Nodes accept proofs that reference recent Merkle roots, allowing some staleness while maintaining security.

The Service Provider syncs with the on-chain TicketRegistry and distributes the latest Merkle root of funded tickets to all its Service Nodes. The Service Provider maintains unified nullifier tracking and handles real-time nullifier checks and batch on-chain settlement.

This approach provides:

  • Service Node simplicity and operational efficiency;
  • Unified double-spend prevention across all Service Nodes under the same Service Provider in a single-Provider model.

Alternatives Considered:

  • Full blockchain sync at every Service Node would increase complexity and resource usage.
  • Pure optimistic verification without cryptographic state proofs would provide weaker security.
  • Decentralized state distribution would add coordination overhead.

Double Spending Trade-off

The architecture accepts double spending before nullifiers are redeemed on-chain. Strict prevention would require real-time blockchain synchronization, but Users are lightweight edge devices that should not track blockchain state or commit to specific Service Providers ahead of time.

The current design allows ticket commitments to be funded without reference to specific Service Providers. This provides flexibility for Sponsors to fund tickets in batches and for Users to choose Service Providers based on reputation and availability. Ticket nullifier commitment occurs at usage time, not funding time, enabling adaptation to changing Service Provider landscapes.

In a multi-Service Provider environment, a malicious User could use the same funded ticket to generate eligibility proofs for different Service Providers.

Service Providers can balance redemption timing versus double-spending exposure:

  • Wait longer to accumulate nullifiers for batch redemption;
  • Redeem immediately to minimize double-spend risk.

An alternative architecture would link ticket commitments to specific Service Providers at funding time, eliminating double spending risk but reducing flexibility.

Post-MVP Development

Multi-Provider Support with Reputation and Progressive Trust

The ProviderRegistry contract maintains an on-chain registry of independent Service Providers.
Each Provider deposits funds demonstrating commitment to honest service provision. The ProviderRegistry enforces withdrawal delays to prevent “scam-and-exit” behavior.

Providers operate within separate nullifier spaces. The TicketRegistry provides global double-spend prevention via a global set of redeemed nullifiers. Double-spend attacks across different Providers are possible before the first nullifier redemption since nullifiers are tracked locally until redeemed on-chain.

Provider reputation is based on aggregate Service Node performance metrics. Slashing penalizes Providers for systematic failures.

Service Node Discovery and Progressive Trust

A Progressive Trust protocol allows Users to incrementally evaluate Service Providers based on on-chain reputation and local interaction history. Service Providers advertise their Service Nodes
with cryptographic ownership proofs. Providers sign and publish Service Node endpoints along with performance metrics for User discovery. Users select Service Providers based on their evaluation criteria, verify Service Provider signatures, and connect to the Service Nodes operated by their chosen Service Provider. Users gradually increase exposure based on successful interactions and rotate Service Providers when systematic failures occur.

Stronger Economic Incentives

Future economic improvements might also include:

  • Price negotiation between Users and Service Providers;
  • Demand-based dynamic pricing;
  • Alternative payment mechanisms such as probabilistic micropayments or payment channels;
  • Third-party monitoring with challenge-response mechanics.

Call for Feedback

We welcome community feedback on critical design decisions, for instance:

  • Double-spend trade-offs between user flexibility and security;
  • MVP scope balancing implementation feasibility with decentralization goals;
  • Cryptographic model optimizing privacy, proof complexity, and verification efficiency;
  • Economic assumptions and their impact on adoption.

We also welcome feedback on any other aspects of the architecture.


I don’t think this double spend protection scheme is enough.

If the smart contract is not gate-kept and users are anonymous and they can double spend (even a little), then the potential utility gained is unbounded, no?

A rational actor would cheat; they can’t be identified or punished anyway.

Maybe we could state that each service provider has their own contract created from a factory?

Can you clarify the expectations here? Are the nullifiers committed on-chain at any time, or only when redeeming?

Why not let service nodes get the root directly from the chain? Sounds like we are looking at adding an extra communication protocol here that could be avoided.

Similar to the above, we are adding back-and-forth communication here before a request by using the Service Provider as an intermediary.

Have we considered just sending the nullifier on a content topic over Waku so that all service nodes can monitor it and cache it, so that nullifier verification can be done locally?

I disagree here, complexity is being increased by getting the Service Provider to be a proxy to the blockchain for the service nodes.

I would suggest to review this point and have service nodes check the blockchain themselves for new secrets, and use Waku as a mempool for nullifiers.


Scope looks good; my main feedback is that complexity is increased by preventing service nodes from reading state from the blockchain, which seems unnecessary.

Thanks for this write-up! I think this is a great step forward in simplifying our model and finding a practical DoD for the MVP. Most of my questions below are for clarity around upgradeability. I think it’s great to define a simplified MVP, but we need to understand what fundamental limitations we introduce and how we’ll address those in future.

The terminology follows privacy literature conventions

Yes! Thanks for this. Let’s try to stick to these terms (or negotiate new ones where necessary). Please correct me/us when we don’t follow these naming conventions.

that referenced Merkle root is within acceptable staleness window;

To check my understanding: if some user were to use a stale Merkle root, their EligibilityProof would fail and they’d have to recompute the Merkle proof with the latest root? Does this mean that Users need to read the contract to generate Merkle proofs?

Service Provider checks nullifier against unified set of previously seen nullifiers

UPDATE: wrote the paragraph below before reading the “Double Spending Trade-off” section. Keeping it, though, as I believe the main point about upgradeability stands - even if we don’t solve double-spend in a first phase, I think we need to understand the path to get to a working protocol that can’t be trivially exploited.

I guess this is the main thing that I don’t quite understand. If only the nullifiers are provider-specific, what prevents a user from submitting the same ticket to multiple providers (i.e. the double-spend problem)? I understand that we’re making the simplifying assumption that there’s only one Service Provider for now, but I think we at least need to understand what the upgradeability of this payment protocol is for this to be an “MVP Phase 1”. We already signal that we’re preparing for this multi-provider world by considering the provider in the nullifier at all. In other words, does this double-spend problem (eventually) necessitate a channel with semicustodial intermediate nodes? Or perhaps a more complex probabilistic ticketing system? If not, what is the ideal functional protocol that can build on this simplified first step?

An alternative architecture would link ticket commitments to specific Service Providers at funding time, eliminating double spending risk but reducing flexibility.

Indeed, and could be prohibitively expensive if users often switch providers or providers disappear.

A nullifier can be redeemed for cash in the TicketRegistry contract exactly once

I’m back at the nullifiers. Apologies if I’m missing something obvious, but I’m not quite sure that I understand how this works. Why would making the nullifier provider-specific help address the double-spend problem if it will in any case be (only) the first provider that submits the nullifier that will successfully redeem the ticket? In other words, whether the nullifier is:
(a) only a hash of the ticket secret, maliciously submitted to multiple providers, or
(b) a hash of the ticket secret and a provider id, maliciously submitted to multiple providers
the winning redemption remains the first one?

Interesting idea, but this seems potentially more expensive than just having the ticket commitments be provider specific? Or is there some other advantage? @SionoiS

Agree with @fryorcraken here. I think we can (for now) further simplify this model by just collapsing the service node and service provider into a single entity.

I think this is a promising idea and one we considered a few years ago when first looking at these types of models. This could allow all providers to broadcast nullifiers as they receive them. It may be possible (and necessary) to include staking and slashing for offenders.


I was going for simplicity but thinking about it more, I find it not a very good idea.

Ideally a single contract exists; this would give us some k-anonymity.

Thank you for the great MVP.

If the ticket_commitment is the public input, then there could be linkability between the committer ID and the spender of the ticket, especially if the sponsor is equal to the sender. Is it OK for the privacy requirements?
One option could be to move it to private inputs, so that linkability between the committer and spender is avoided.

Thanks for the proposal, Sergei. I like that this approach tries to use the smart contract as the responsible actor for payment handling instead of decentralized custodial payment nodes, eliminating that extra layer. Below are some of my thoughts, questions and concerns:

If only a Merkle root is submitted to the sponsor and thereby on-chain, how would it be possible to create a Merkle proof of inclusion of a specific commitment as part of the eligibility proof? I think all commitments have to be recorded on-chain as-is.

I would be really keen to know what the path to decentralization is and whether that would require entirely re-designing the protocol. In the absence of a centralized service provider, preventing double spend would only be possible with immediate state sync and nullifier inclusion in the nullifier tree at the smart contract level. And that would be both expensive and slow (a request cannot be served until confirming that previous spending did not happen).

Oh no! ZK proof verification would be very expensive on-chain for micropayments. The cost of verifying each proof would very likely exceed the amount associated with the commitment itself. I wonder what the need for this is, and why the proof verification can’t be the service node’s/provider’s responsibility alone while the smart contract only releases payment based on who sends a valid nullifier. Also, would all commitments have to be of the same value? The smart contract would not know which nullifier is associated with which commitment while releasing the payment, which can be a downside.

How would this work? The smart contract, or even the sponsor for that matter, will not be able to track which commitments are ‘spent’ (the nullifier should not be correlated with a commitment, else there is a risk of linkability). The only way the smart contract could know is with a proof of non-inclusion in the nullifier tree, which the user should probably send to the sponsor, given there’s already an element of trust between them.

Overall I think another tradeoff here is that this would work with pre-determined values for commitments and that will make dynamic pricing for store requests or decentralized marketplace tricky.

Thank you everyone for your insightful comments!

I will now respond to everything in one reply, clustering comments by topic.

Double-Spend Protection with Nullifier Announcements

I don’t think this double spend protection scheme is enough. If the smart contract is not gate-kept and users are anonymous and they can double spend (even a little), then the potential utility gained is unbounded, no? A rational actor would cheat; they can’t be identified or punished anyway.

Have we considered just sending the nullifier on a content topic over Waku so that all service nodes can monitor it and cache it, so that nullifier verification can be done locally?

This could allow all providers to broadcast nullifiers as they receive them. It may be possible (and necessary) to include staking and slashing for offenders.

Let us consider a nullifier announcement protocol using Waku for real-time cross-provider double-spend detection.

We could change nullifiers to be provider-agnostic (nullifier = hash(ticket_secret)) and require signed announcements (a sketch follows the numbered steps):

  1. Provider A serves user request and broadcasts sign_A(nullifier) on Waku
  2. All providers monitor this topic and cache announced nullifiers
  3. Provider B sees duplicate nullifier in cache and rejects request
  4. Provider A redeems on-chain using announcement signature as proof
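
A rough sketch of the announcement-and-cache side of this idea, with HMAC standing in for a real provider signature and the content topic name purely hypothetical; the actual Waku publish/subscribe plumbing is omitted:

import hashlib
import hmac
import json
import time

CONTENT_TOPIC = "/store-incentivization/1/nullifier-announcements/json"  # hypothetical topic

def sign(provider_key: bytes, payload: bytes) -> str:
    # Stand-in for a real signature scheme tied to the provider's on-chain identity.
    return hmac.new(provider_key, payload, hashlib.sha256).hexdigest()

def make_announcement(provider_id: str, provider_key: bytes, nullifier: str) -> dict:
    body = {"provider": provider_id, "nullifier": nullifier, "timestamp": int(time.time())}
    body["signature"] = sign(provider_key, json.dumps(body, sort_keys=True).encode())
    return body  # would be published on CONTENT_TOPIC via a Waku node

class NullifierCache:
    # Each provider monitors the topic and rejects requests whose nullifier was already announced.
    def __init__(self):
        self.seen = {}  # nullifier -> announcing provider

    def on_announcement(self, msg: dict) -> None:
        self.seen.setdefault(msg["nullifier"], msg["provider"])

    def is_double_spend(self, nullifier: str) -> bool:
        return nullifier in self.seen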

For slashing, we could address “announce but don’t serve” offenses through ProviderRegistry contracts with deposits and reputation systems in later development.

Architecture Simplification w.r.t. Service Provider

I disagree here, complexity is being increased by getting the Service Provider to be a proxy to the blockchain for the service nodes. I would suggest to review this point and have service nodes check the blockchain themselves for new secrets, and use Waku as a mempool for nullifiers.

I think we can (for now) further simplify this model by just collapsing the service node and service provider into a single entity.

Let us merge Service Node and Service Provider entities for architectural simplicity. I suggest we refer to the merged entity as Service Provider, reflecting its active role in announcing and redeeming nullifiers. In the implementation, we will likely have a Service Node as a separate entity, but from the protocol’s point of view this is an implementation detail.

ZK Proof Structure Fix

If the ticket_commitment is the public input, then there could be linkability between the committer ID and the spender of the ticket, especially if the sponsor is equal to the sender. Is it OK for the privacy requirements? One option could be to move it to private inputs, so that linkability between the committer and spender is avoided.

Good catch! Let us move ticket_commitment from public to private inputs to eliminate linkability.

Ticket Expiry Handling

How would this work? The smart contract, or even the sponsor for that matter, will not be able to track which commitments are ‘spent’ (the nullifier should not be correlated with a commitment, else there is a risk of linkability). The only way the smart contract could know is with a proof of non-inclusion in the nullifier tree, which the user should probably send to the sponsor, given there’s already an element of trust between them.

We could use a user-sponsor collaborative process: users provide cryptographic proofs of non-inclusion showing their nullifier was never redeemed, then sponsors submit these proofs to reclaim unused tickets.

Alternatively, we could remove this feature entirely for better privacy.

ZK Verification Costs

Oh no! ZK proof verification would be very expensive on-chain for micropayments. The cost of verifying each proof would very likely exceed the amount associated with the commitment itself. I wonder what the need for this is, and why the proof verification can’t be the service node’s/provider’s responsibility alone while the smart contract only releases payment based on who sends a valid nullifier.

I’m not sure I understand this: doesn’t “smart contract only releases payment based on who sends a valid nullifier” imply that the contract performs ZK verification? How else would it determine if a nullifier is valid?

AFAIU, we need on-chain ZK verification to cryptographically prove nullifiers correspond to funded tickets. However, we can amortize costs through batch verification.
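
To make the amortization argument concrete, here is a toy calculation with made-up placeholder gas figures (not measurements of any real verifier or chain):

# Illustrative placeholders only; real gas costs depend on the proof system and chain.
BASE_TX_GAS = 21_000              # fixed transaction overhead
BATCH_PROOF_VERIFY_GAS = 300_000  # one aggregated/batched proof verification per redemption
PER_NULLIFIER_GAS = 5_000         # marking one nullifier as spent

def gas_per_nullifier(batch_size: int) -> float:
    # Fixed costs are shared across the batch; only the per-nullifier cost stays constant.
    return (BASE_TX_GAS + BATCH_PROOF_VERIFY_GAS) / batch_size + PER_NULLIFIER_GAS

print(gas_per_nullifier(1))    # worst case: a single nullifier carries all fixed costs
print(gas_per_nullifier(100))  # fixed costs shrink to ~1% per nullifier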

Storage Architecture

If only a Merkle root is submitted to the sponsor and thereby on-chain, how would it be possible to create a Merkle proof of inclusion of a specific commitment as part of the eligibility proof? I think all commitments have to be recorded on-chain as-is.

We store both individual commitments and Merkle roots. The TicketRegistry records individual commitments for tracking funded status while organizing them into Merkle tree structures for efficient proof verification.

Pricing and Fixed Denominations

Overall I think another tradeoff here is that this would work with pre-determined values for commitments and that will make dynamic pricing for store requests or decentralized marketplace tricky.

I agree this is tricky. We can start with fixed-value commitments and later extend the protocol to support multiple denominations so users can combine tickets (needs multi-ticket ZK proofs). Handling change could be a separate ZK service (otherwise, users must overpay if they lack exact amounts).
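
A toy illustration of the overpayment point, assuming fixed-value tickets of a single denomination (the numbers are arbitrary):

import math

def tickets_needed(price: int, denomination: int) -> tuple[int, int]:
    # How many fixed-value tickets cover `price`, and how much the user overpays without change.
    count = math.ceil(price / denomination)
    return count, count * denomination - price

# With 10-unit tickets, a 12-unit request costs two tickets: the user overpays by 8 units.
print(tickets_needed(12, 10))  # (2, 8)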

The problem I see is that the unboundedness of cheating makes it impossible to ever slash “enough”.

E.g., put $100 up for slashing, then make 1,000 $1 requests at the same time. Get $900 worth of requests for free.

Sadly I don’t see how to avoid this problem without binding or limiting requests.

Read the research paper again; they explain the economic analysis very well.

I agree with the fundamental limitation, but I’m not sure I follow your specific example. In the potential reputation system, it’s the Provider who puts up a deposit to join the marketplace. But the Provider doesn’t issue requests; Users do (towards a Provider, attaching a ticket nullifier). A User may try to use the same nullifier with multiple Providers, which may indeed lead to local double-spending. On the other hand, I’d argue that for practical purposes:

  • for Store (and potentially some other protocols) there is little additional value in issuing requests to multiple Providers at the same time (you’ll get the same response), and a double-spending request made after a delay will get detected with the help of a Waku-based nullifier announcement bulletin board (or on-chain redemption);
  • it’s up to the Provider to decide on policy w.r.t. User requests. A Provider may decide, for example, to not accept more than 1 request per second from any given user, which would limit potential damage.

Please let me know if I misunderstand your point.

I misunderstood; I thought the user was slashed.

I agree, but I don’t see the benefit of spending time building a scheme that can only work because multiple requests have little additional value. This seems very limiting.

Even if possible anonymously, my point still stands. Every user could send multiple requests at the same time to different providers while following the protocol. It seems not ideal to me.

I agree, but isn’t this a fundamental limitation that can only be solved with blockchain consensus? In other words, whatever scheme we add on top of a blockchain for efficiency and privacy, we’d inevitably have to assume a weaker security model regarding double-spending. What’s the alternative?

Fundamental, yes; this is why it needs to be detectable and punishable somehow, IMO.

I’ve thought more about double-spending and “read” (admittedly, with help from AI tools) the paper you mentioned: Decentralized Anonymous Micropayments by Chiesa et al, 2016.

Here’s my understanding of the core concept. The paper combines a nullifier-based payment scheme (as used in Zerocash/Zcash) with a cryptographic technique called Fractional Message Transfer (FMT) to create a probabilistic payment system. FMT allows a sender to encrypt a message so that the receiver can decrypt it only with a certain probability. The Decentralized Anonymous Micropayments (DAM) protocol applies FMT to signed blockchain transactions: instead of every service request resulting in an on-chain transaction, only a small fraction (with probability p) actually trigger payments. The rest are “null payments” that have no blockchain effect. This approach improves scalability by reducing the number of on-chain transactions, while maintaining the correct expected economics by increasing the value of each individual payment.

A key point for our discussion is that the paper acknowledges double-spend detection is impossible unless it results in a “macropayment” (i.e., an on-chain event). The authors highlight that double-spend detection requires global synchronization (an on-chain trail), but the purpose of off-chain systems is to improve scalability by keeping most activity off-chain.

Since preventing all off-chain double-spending is impossible, the DAM protocol relies on economic deterrence: users must deposit funds that can be slashed if double-spending is detected during the occasional on-chain “macropayment.” While double-spends among “null payments” go undetected, the risk of eventual detection and severe slashing is meant to make attacks unprofitable.

Thinking about applicability to our architecture, I think that we face a fundamental trilemma here between unlinkability, fairness (no punishment for innocent users), and perfect deterrence. If we slash individual users, we’d have to break unlinkability. If we penalize sponsors, we unfairly impact their innocent users.

I think we should choose the “unlinkability + fairness” corner of the triangle, accepting that perfect deterrence is not achievable.

That said, we can consider probabilistic nullification as a way to reduce synchronization load. For example: instead of issuing 1,000 one-cent tickets that each require nullification and synchronization (via Waku or an on-chain nullifier set), we could issue 10 one-dollar tickets, where each request has only a 1% chance of consuming a ticket. The expected cost per request stays the same, but we reduce the synchronization bandwidth requirements by 99%.
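
A back-of-the-envelope simulation of this example (10 one-dollar tickets consumed with probability 1% per request versus 1,000 one-cent tickets consumed deterministically); the numbers are the ones from the paragraph above and the simulation is purely illustrative:

import random

REQUESTS = 1_000
P_CONSUME = 0.01     # probability that a request consumes a one-dollar ticket
TICKET_VALUE = 1.00  # dollars

def simulate(seed: int) -> tuple[int, float]:
    rng = random.Random(seed)
    consumed = sum(1 for _ in range(REQUESTS) if rng.random() < P_CONSUME)
    return consumed, consumed * TICKET_VALUE

# Deterministic scheme: 1,000 one-cent tickets -> $10.00 and 1,000 nullifiers to synchronize.
# Probabilistic scheme: same expected cost ($10.00 = 1,000 * 0.01 * $1.00),
# but only ~10 nullifiers need to be announced/synchronized.
consumed, cost = simulate(seed=42)
print(f"consumed {consumed} tickets, paid ${cost:.2f}")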

Also, one imperfect way to address double-spending (within the probabilistic approach) is for the service provider to decrease the parameter p for users who have been caught attempting double-spends. A fair critique here is that malicious users could Sybil-attack the system, since tickets are bearer assets and can be transferred to new user identities. This would raise the barrier for cheaters but would not fundamentally solve the problem.

In summary: I think that if we want (a) open access (no KYC), (b) unlinkability, and (c) off-chain micropayments, we must accept some level of double-spending. If the risk of double-spending is too high, we would need to relax some requirements, such as pre-filtering users by the sponsor (which may happen either way if anyone can become a sponsor) or enforcing selective linkability for double-spenders (which feels like reinventing RLN, and where “slashing” only implies loss of privacy but no direct financial loss unless the user is self-sponsored).


I think this is fine; they are cheaters after all.

I think we should just implement what’s in the paper. The system does everything we need, and a lot of time has been spent researching every angle.

Implementing this paper would give us the expertise to maybe modify it later if it doesn’t suit our needs perfectly.

I’ve been looking at all the cryptographic primitives used; it will be hard work, but nothing outside our reach.

Why not simply reuse RLN exactly as specified for this purpose? Indeed, we are already working on different ways in which sponsors can provide and distribute RLN memberships (similar to how they might distribute tickets), and slashing should be enough of a deterrent (with or without the broadcasting of nullifiers between service providers) to (mostly) prevent double spending attempts. Wouldn’t this also be an improvement on the FMT scheme, where even “null” tickets could be used to link and slash users (not just “macro-payment” tickets) if double spending is detected?

I can also imagine that RLN (i.e. membership proofs) could be phased in, with MVP phase 1 simply assuming that every request passes membership verification (with weak/no double-spend protection), with RLN then added in a next phase.

IIUC a probabilistic scheme adds the requirement that the total number of requests should be high enough for each provider to make a profit with high probability. This is a reasonable requirement of course, but perhaps the scheme you describe above can be seen as a further enhancement on the “simpler” scheme of just issuing micropayment tickets? Each ticket has a 100% probability to be consumed in the simple system with a single provider and only a few service requests, while we phase in probabilistic tickets for a progressively more decentralised system with higher request rates? I’m trying to be explicit about a first phase protocol with reasonable scope, but acknowledging the design/development work that should be done in parallel to grow into a production-ready scheme.

The problem I see with reusing RLN is that the rate limit is global. It could be used to enforce an economic bound, BUT this bound needs to be calculated based on deposit size, provider set size, payment value, and payment probability. We would need many RLN membership sets…
AFAIK with RLN v2 this is possible! It still doesn’t give us the private accounting needed for this scheme to work, though. Also, RLN users can always slash themselves at any time.

Another point is that nullifiers must be broadcast; otherwise, slashing can easily be avoided (same problem as using RLN to rate limit on multiple shards).

Any probabilistic scheme is vulnerable to the “I got very unlucky, this scheme sucks!” complaint, no matter the odds.

Yes it should be possible to work on FMT or some probabilistic scheme in parallel.

Yes. 🙂 Got there before I could add a similar comment. Not so sure though if the rate here is of much importance - we simply care about double signalling within an epoch (if we assume that all tickets will be redeemed on-chain within the time window of a single epoch). This might again suggest a somewhat modified nullifier scheme that simply reuses the membership mechanism of RLN but does not care about rate limits.

Not sure what private accounting you’re referring to here? RLN would simply allow proving that you’re part of the membership and are not double spending. Anonymity is only broken if you attempt to double spend (or exceed the rate limit, which I’m not sure we need anymore).

This seems fine to me? They’ll simply lose access to the service, but tickets they’ve already submitted will be redeemed.

The strength of RLN is that you need only one validator to detect the double spending and trigger the slashing. In other words, even if most providers can’t detect it, if some validator/provider has a broad enough view of nullifiers in the system to slash a good number of double spenders, it should provide enough of a disincentive to prevent most abuse. If the nullifiers are sometimes broadcast, that would be enough of a deterrent in my opinion to mitigate double spending. Of course, we could strengthen that further by adding a full broadcast layer.

RLN gives us a way to anonymously rate limit, BUT it doesn’t help us privately transfer money from users to providers for payments. I’ve been thinking about it some more, and the slashing mechanism cannot be used to deter cheaters OR pay providers, because the user can slash themselves at any time.

RLN cannot work as-is, since cheaters can get infinite extra utility and then slash themselves before any punishment happens. They then register again and repeat.

Even if we were to use RLN only to enforce some economic bound, the UX would still be worse than the rate limit tags used in the research paper, since RLN requires blockchain access.

Indeed. I’m not suggesting that we use RLN for payments - it’s simply there to bind tickets to a collateral commitment that can be slashed on double spend. This is similar-ish to how deposits work in the DAM paper. The tickets are redeemed separately as in the paper.

As far as I understand the scheme proposed in the DAM paper:

  • each user would need at least one on-chain deposit, probably more. Each deposit is only valid as collateral for a bounded receiver set. In other words, users would need several deposits if they want to expand their chosen range of providers or make use of a different service. The advantage is that deposit commitments are immutable and do not need to be updated if the ledger/contract state changes.
  • merchants would need to know the latest on-chain state to validate tickets (even if not winning tickets) to ensure the bound deposit is still live. This is similar to knowing the Merkle root for RLN validation.

For RLN, users indeed need to know the latest Merkle proof of their membership commitment when submitting a ticket. This is a tradeoff, but would arguably be much cheaper and lighter than having to pay both the transaction and deposit costs for every desired receiver set. Also, the idea is that memberships will most likely be sponsored by a third party, something that will greatly improve UX and is seemingly much simpler for RLN (we’re already working on several ways to achieve this) than the scheme proposed in the paper.

In my mind, even with a few tradeoffs, RLN seems to be the simpler route. What am I missing?