Logos <> Project Alignment and our future

The vision of the founding projects within Logos has always revolved around the concept of the “Holy Trinity,” the three fundamental components of decentralized infrastructure: Durable Storage, Ephemeral Messaging, and Consensus/Execution. An implicit assumption that came along with that narrative was that the implementations of these three pillars would be tightly coupled from the start, meaning that they would seamlessly interoperate with each other, or even be intimately intertwined in their underlying architecture. This document attempts to take a closer look at that assumption in order to determine whether or not it’s worthwhile to maintain. In turn, we can consider what that means for our Organizational Roadmap, the associated projects underneath it, and the broader ecosystem that collaborates alongside us.

These thoughts are a culmination of:

  • Long-standing thoughts/concerns around the progressively divergent technical paths of the three projects currently under Logos: Waku, Codex, and Nomos
  • The introduction and development of the zkVM project and questions around how collaboration/ownership should work with the Nomos project
  • Conversations over time with other Core Contributors around these two topics, and peripheral topics that involve "how the Org works and grows together."

I’d like to note up-front that this is just a model and thoughts around it. The reality of how we move forward will be more subtle and a mix of all these options. So here goes…

I’m looking at all of this by attempting to answer two key questions which create potential paths forward:

  1. “How big do we want to be?”
  2. “How early are we?”

The combination of the answers to these two questions sets the path forward for an appropriate project strategy. "How big" sets the tone for how far we expect the Logos Collective to span w.r.t. the number of projects underneath it and the potential size of a given project. It also reflects the "investment appetite" of the Collective. "How early" helps us understand the maturity of the ecosystem, the expected innovation and growth throughout it, and the rate of its expansion, for which we should prepare as best we can.

A Mental Framework to Describe Things

Putting it another way, let’s view the Logos/Status projects within the larger Web3 ecosystem as “bubbles” associated with various “tech stack layers.” These layers are to be seen as concentric circles with the center being the “deepest” in the tech stack, and the outer-most or highest layer being considered the “Application Layer,” which caters to software that actually interfaces with the “general consumer.”

The center of the ecosystem is considered "foundational technology." It's the groundwork that others build upon. Often in technology stack analogies and models it is referred to as "deep" in the stack. An important consequence of this positioning is that it suffers from the fewest external constraints, simply because it depends on fewer things. Each concentric circle, as you move outwards, depends on more and more things "below" it, thus inheriting the constraints that technology imposes. It is the end goal of applications to build products people want within the features and constraints of the technology available below them.

As the ecosystem grows and evolves, all of the circles grow (this is a rough approximation - we won't get into all the possibilities for growth and how that's shown in the circles' relationships to each other). The projects within the circles also change, affecting their "representative space" within their respective layer and the ecosystem at large. In other words, you can think of total ecosystem growth as a stressor to which projects need to react. They can either "expand" or "harden." A hardening response is an increase in specialization in order to continuously fulfill a project's role, while an expansion response leverages a currently working product to reach a broader audience. You can't do both easily with the same resources.

This concentric circle analogy is also nice in that as the entire ecosystem grows, the rate at which the outer circles grow is larger than the rate of the inner ones, meaning that the potential total landscape of the application layer will grow faster than the potential total landscape of the infrastructure that supports it. This is just a math fact, but it ports nicely to our analogy. Fun side factoid: this breaks down as you increase "dimensionality," since in higher dimensions nearly all of a solid's volume ends up concentrated near its surface.
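For the curious, here's a back-of-the-envelope version of that "math fact" (my own sketch; not essential to the argument): a ring of thickness $t$ sitting at radius $R$ has area

$$A(R) = \pi\big((R + t)^2 - R^2\big) = \pi t\,(2R + t),$$

which grows linearly in $R$, so as the whole picture scales up, the outer layers always gain more area than the inner ones. The high-dimensional caveat: for an $n$-ball of radius $R$, the fraction of its volume lying within distance $\varepsilon$ of the surface is $1 - (1 - \varepsilon/R)^n$, which tends to 1 as $n$ grows - essentially all of the volume sits "at the surface," and the inner/outer distinction washes out.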

Under that idea, note that the more foundational a project is, the more likely it is to be able to expand its footprint with the growth of the ecosystem (under the same resource input) because the need to innovate is less, thus you can serve a larger audience with the established tech you have.

This means that as the ecosystem grows, if our projects "stay the same size," the distance between them and other projects grows as well, eventually creating enough "space" for another project to sit in between, operating as glue. An example of this is what you see now in the "modularization" meta happening across blockchain infrastructure, and the further specialization and tradeoff-optimizations that exist across the competing products. A company like Espresso Systems couldn't exist in the Bitcoin era because the network was basically modeled as a distributed monolith.

Project Adaptation in this Model

If you visualize a given project as "taking up space" like a bubble within the ecosystem, and that bubble expands as the ecosystem expands, then various things can happen as a result of that expansion. Assuming the same amount of resources is applied to a project, it will need to either specialize further or broaden its area of service. It is important to note that this is fundamentally a function of the industry and how it's moving. The ability to broaden the bubble (assuming constant resources) means that the project's solution has ossified substantially within the ecosystem.

Now let’s look at our concepts through this model to see how things may be affected.

Big vs Small

This concept asks us to consider "how much of the space within these circles do we want to occupy, and how much should we grow with the expansion of the industry?"

When the ecosystem expands, space is created that has the potential to be occupied, which can be considered "open opportunity" within the ecosystem. That eventual occupation comes either from an existing project expanding its footprint or from new projects filling in the gaps. Is the org's strategy to occupy key positions and maintain them (small), or to continuously grow and adapt with the ecosystem's needs and opportunities (big)?

Early vs Late

This is a sentiment around “how much do we think the tech will change and grow” over time from here, or “how ossified is the tech”? If we’re late to the game, then underlying mechanisms are reasonably understood and in place which leads to lower risk in the coupling of differentiated projects. Basically, you’re less likely to identify a needed breaking change if things aren’t changing that much.

Furthermore, the “total space” of the industry doesn’t expand substantially from this point, thus leaving our understanding and “project coverage” to remain similar to where it is today, thereby allowing projects to remain “close” to one-another. This means the circles aren’t going to change in size or shape in any considerable way, and making bets on “how things fit together” has a lower risk of being wrong.

The alternative to this is that we’re early, and we expect the “ideal implementation” of a given project market to be far from the current State of the Art, which requires rapid innovation and experimentation to stay up with the bleeding edge. An appropriate strategy here would be to “settle” in strategically safe locations (the holy trinity) and figure out the “glue” as the ecosystem expands.

What’s the best long term alignment direction?

Here we discuss what each of these options means and the Logos strategy aligned with it. I've summarized things a bit (grossly) in a Punnett square below, but that won't capture all the details, so we'll go through (a lot of) them too.

Small and Late

We assume that the Logos Collective will not expand substantially compared to the total ecosystem’s landscape and that the current landscape is relatively slow in its growth from this point forward.

A cogent strategy under these assumptions means that we pick key strategic positions and dominate them with the resources at our disposal. Additionally, the positions we hold must work very well with one another. This is facilitated by the assumption that methodologies of interaction won't change much anymore; focusing on interoperability won't hinder our future need to "stay up to date" with other optimizations within a specific domain. (i.e., our shit won't break or become obsolete later because of interoperability choices we've made today.)

Small and Early

We assume that the Logos Collective will not expand substantially compared to the total ecosystem’s landscape, but we also expect that the ecosystem will expand substantially compared to how it exists today.

This reinforces the importance of the positions we choose to occupy, as the niche we fill will need to provide enough value throughout the ecosystem's expansion to warrant lasting relevancy. It also pushes our value proposition, relative to the entire industry, toward the esoteric, unless we pick foundational positions that we believe will remain foundational.

As the ecosystem expands, the gaps between project positions will become larger. Our decision to remain "small" means that we need to be ok with other projects outside of Logos serving as the glue, as the resources allocated to any given project need to be focused on "hardening" it to remain relevant in a changing world.

The consequence is a divergence in “collaborative ease” as projects align themselves with the best practices of their respective bleeding edges in order to stay competitive. In other words, you’ll need more glue to stitch projects together. You’ll also continuously reduce the likelihood that a method within one project can be seamlessly reused in another because they’re independently optimizing for different things.

Big and Late

We assume that the Logos Collective will continue to expand its position within the broader ecosystem, and that we have a pretty good idea of what that looks like from here on out. Furthermore, things won't change too much within the broader ecosystem, and the current landscape will grow relatively slowly from this point forward.

Because we know how things work, the things we build should work together without much effort.

Big and Early

We assume that the Logos Collective will continue to expand its position within the broader ecosystem, and we also expect that the ecosystem will expand substantially compared to how it exists today.

Because we are early, it is likely irresponsible to think that what the State of the Art looks like today will remain the same in the future, which mandates our ability to adapt and continuously evolve our products in order to stay competitive in their respective markets.

My Personal Opinions

I find us sitting firmly in an "Early" ecosystem, but somewhere between a "Big" and a "Small" organization; where we land will eventually be decided by available funding, the "Founders' Vision," and hopefully conversations that stem from this post.

I personally believe we’re “Big/Early” which has a few consequences. I’ll try and get to some of them down below.

The extension of the Logos Collective from Status was a precise attempt to establish key foundational positions lower in the stack. This is usually portrayed via the continuation of Ethereum’s “Holy Trinity” narrative (but with stronger privacy in our case). We believe that no sufficiently good decentralized application can be built without the main three pillars: Storage, Messaging, and Agreement/Execution. The Status Applications clearly stand within the application layer to consume that infrastructure as it becomes ready to give users an experience that is currently unavailable elsewhere.

It’s clear that there is overlap of functional requirements across these projects. For instance, Nomos needs distributed data availability and ephemeral messaging in order to work. Why not just use the other projects to do this? Seems like the obvious answer, right? But do the generalized private messaging decisions associated with the Waku roadmap make it too inefficient for the Nomos consensus mechanism (e.g. leading to too long of a time to reach a consensus decision)?

As we look at the rest of the org and the various projects within it, and as the total ecosystem expands, we're seeing a divergence in technical compatibility, represented by a growing need for additional middleware that has to be developed. An example of this is the "chat" abstraction/optimization on top of Waku, to be consumed by Status for its more constrained needs. This work represents "a new bubble" in between projects we already have, and the subject of "who owns this work" has been a point of contention for quite some time, along with the question, "what is it called?"

Here are a few more examples of organizational difficulties around this topic:

  • Technical alignment and coordination of Nescience (zkVM team within Vac) and Nomos
  • Nomos exploring the creation of a mixnet on top of libp2p as opposed to relying on Waku protocols for network-level privacy
  • More generally, no Logos Collective infrastructure project (nor Nimbus) using Waku for message passing
  • Codex recreating an ethers.nim interface to blockchains instead of using what’s been built by Status Desktop

It’s clear that we want applications to use these foundational infrastructure projects to serve users, but it isn’t clear how well the individual projects work with each other and how much room should be left in between them for “glue projects.”

If we are leaning towards an "early" ecosystem viewpoint, then a project that is attempting to live on the bleeding edge of its respective market should work to stay there and remain competitive, which means potentially making optimizations that move away from cross-project compatibility. That being said, when these things happen, there should be an explicit technical justification for the move. This allows the in-between glue that will eventually be needed for cross-project integration to be better understood and planned for. These justifications, as far as I'm aware, don't exist or are not easily identified when the question is brought up.

Another factor to take into account is that the more generalized a project becomes, the less immediately useful it is in an optimized instantiation, and thus the further it moves from integration into a specific project that may require optimization for a niche purpose. In other words, a more generalized project makes the bubble it takes up in the ecosystem larger because it has more potential use cases, but it also increases the distance between itself and any optimized use-case, thereby requiring additional development or, worse, resulting in outright incompatibility.

If the approach is decidedly "Big/Early", it seems reasonable to assume that foundational projects may not be directly compatible with each other over the long term because of the choice to remain competitive and useful within their own domains. We need to be ok with there needing to be glue between them, and whether or not we also develop that glue is a question that remains to be answered, probably on a case-by-case basis.

Other factors not considered here but clearly important

This is already a long post, and I've surely missed some key points relevant to this decision and alignment. To name a few:

  • Individual business cases vs Logos mission
  • Financial allocations
  • Investment portfolio strategy

But it should serve as a good starting point and a shared language to continue the conversation. Please discuss: tell me what I've gotten wrong, your opinion of how we should work and grow together, what I've missed, additional pros/cons, etc.

8 Likes

One thing that still seems vague to my own understanding is the role of each BU in the future. Are we intending that they remain under the Logos umbrella, or will they set their own sails at a certain point?

I noticed a big focus on identifying where we are in the ecosystem timeline. However, I believe that each program has a unique position within its own ecosystem.

I'm even questioning whether they still encircle each other from a use-case perspective. From a technological perspective, the similarities and shared values stay aligned and encircle them easily, but the potential users, clients, and specific places where this tech would be applied seem far apart (which I believe is a good thing).
This flows into your question, "How does the org grow together?": outside of the projects' core infrastructure, does it even make sense to have them so aligned and glued together when they serve widely different end purposes?

I believe the tempting question of "How big do we want to be?" is an important one that resonates with me. Whatsapp had a team of 55 peers in 2014 (before the FB buyout); however, they managed to grow their userbase to roughly 400mil users (according to this source). Obviously, they had a clear product-market fit, and they built upon the tech of giants as a consumer application. Nonetheless, it's clear that "big" can be defined in multiple ways, looking both at team size and userbase size. I'm not bringing these points up to throw shade at our own production, userbase, or effort, but rather to signify the importance of fine-tuning our mutual efforts rather than simply expanding. There'll always be use cases for expanding and growing, at least until the funds run out.

I am happy we’re having a forum-formatted discussion about these points, as I also deem them critical. A team can’t function well when we’re not all running towards the same goal together.

4 Likes

I strongly believe in “start with the end in mind”. And while this is a very hard thing to do when speaking of foundational tech, protocols, and infrastructure projects, what could be the “glue” is the application layer where we have user-facing products. Because in the end, what we create is meant to be used by people, giving them the tools to self-govern, and become self sovereign citizens of a network state.

With the lab, we create prototypes of user-facing products using the solutions our BUs create, so as to limit the energy the bigger org spends imagining abstract ideological concepts, by turning them into easily digestible illustrations.

This could create valuable feedback loops throughout the organisation, spanning different skillsets, talents and expression forms, and could express a shared story we can all contribute to.

I believe it is easier to just say "wait, I'll show you" than to have people listen to hour-long talks or read 100-page documents to understand our vision.

6 Likes

I see logic in applications being the “glue”. It would only make sense that this way, these different end goals could be aligned. However, I don’t necessarily see Logos building that glue (outside of the lab, there’s definitely a need to show use cases and prototypes to peers before they’ll do their own thing).

Continuing the thought of starting with the end in mind, something plenty of startup books make a mantra of: something I'm also still trying to understand is whether our end is to become public infrastructure or to profit from our infrastructure. These are deeply different goals, though both align with what we're currently aiming for.

1 Like

Nicely summarized. There isn't anything I disagree with outright, and my perception is similar, sitting closer to the Big/Early quadrant.

Personally, I see the Logos projects as addressing fundamental problems in their own respective areas. The level of maturity across the entire spectrum varies greatly; for example, messaging is a largely neglected topic across the board, both in research and implementation. This brings significant challenges in understanding how to solve unique problems such as spam prevention, which might consume the majority of the resources.

The same goes for storage: there are simply no good solutions out there we can just build on or reuse to get us where we want to be. Thus, a large amount of resources and time is dedicated to devising solutions to these fundamental problems that aren't (fully or at all) addressed elsewhere.

This is the case with all deep innovation.

All of it to say that interoperability and compatibility between the different projects within the org might be extremely challenging if not impossible at this point.

In terms of glue and user-facing applications, I believe this can be a combination of both: in-house innovation, often in the form of prototypes, and external collaboration across the ecosystem. This is why having broad compatibility with other external protocols might be more beneficial in the initial stages than compatibility and interoperability with Logos projects.

5 Likes

Nice framework to structure dialogue around the real-time decisions that need to be waded through regarding the development of the Logos projects in terms of the Logos mission.

I think we are in various growth phases of Big/Small (Medium) determination for each of the Logos projects and the same in terms of Early/Late ecosystem competition. The example you gave of shifting internal incompatibility between the projects is a fair assessment of things as projects are evolving now.

It’s definitely not as easy as let’s “plug Waku for messaging into Nomos” or “let’s give Nomos data by plugging Codex in”. The needs of Nomos are more nuanced than that and Codex is not about data publication but rather more about efficient and stronger guarantees around the persistence of data (data durability). To echo what you said - Waku and it’s generalized privacy design decisions affect it’s messaging performance (in the context of consensus for a private L1 which seeks to be as performant as possible w/ it’s privacy constraints for multiple pieces of it) and make adoption not possible within the technical constraints/requirements of how fast Nomos’s consensus requires the messaging layer to go to settle.

How projects continue to evolve and become "bigger" is something I believe will be made clearer for Codex as it grows up: as we get closer to the end-of-year testnet release, start releasing versioned binaries, and build a sense of "what the market actually demands" from developers who engage with us, we can compare that with the project lead's (Dmitriy's) vision and balance it with the founder's vision (Jarrad's).

Ingesting that feedback from the beginning of next year is important for building pragmatic decision-tree options, so that the communications and business development effort can best take advantage of the paths within that tree and drive the car Codex is developing into once the proverbial "rubber meets the road".

Something very important to me is that we (the Founders, Program Owner, Program Managers, Project Leads, and all other supporting BU leads) have a strong shared mental map of the impact that business/architectural design decisions may have. We should always try to err towards choices that remain as close as possible to the Logos ethos & mission, while at the same time not suffocating projects from building diversified revenue streams or continuing research lines that can bring orders-of-magnitude performance optimizations to critical components of projects (as I write this I think of the proving system in Codex, which has taken 1.5 to 2 years depending on who you ask when it started, but which can be a critical technical differentiator of Codex relative to other storage projects).

I believe we are still early enough for the three projects Codex, Waku and Nomos but we need to continue to move with the same haste and agility that has gotten all of the projects to this current point in each of their different development phases.

Agree w/ Michelle’s point about starting with the end in mind (and to Mf’s point - it would be good to have a clearer understanding of how each project lead and the founder vision can align as something like Codex aims to disrupt Filecoin and seek to grow it’s bubble possibly pushing against that second concentric circle’s bounds) and the glue being applications (and I would add middleware) that consumes Logos project infra.

Thanks to everyone’s hard work we’re getting there slowly but surely and definitely moving the needle towards modular infrastructure for building the Logos network state vision.

5 Likes

Another way to see this is to look at the main driving force for each project.
All projects are expected to have a path to sustainability, which means each being successful by itself, as a singular pillar.
In this case, interoperability can have 2 benefits:

  1. Leverage another project’s users/market penetration
  2. Reduce total effort by leveraging work done by another project

(1) does not really make sense at this point in time, as all projects are very early and have very little/no market penetration. It would actually be a risk to optimize for interoperability to leverage unrealized market share instead of focusing on one's own market growth.

(2) is something that requires more coordination to provide visibility across projects of what could be re-used. I would actually advocate for more of this.

In terms of sharing effort (2), we have 2 types of output that could be re-used: research and software.

On the research side, Vac is how we re-use effort. For example, 46/GOSSIPSUB-TOR-PUSH | Vac RFC, which can be (is?) used by Nimbus and could also be used by Waku in the form of 47/WAKU2-TOR-PUSH | Vac RFC.

In terms of software, Codex and Waku are using output from Nimbus, especially in terms of the Nim ecosystem. I suggest a stronger effort from Logos here to facilitate the sharing of common Nim libraries by creating a Nim ecosystem team that can own and maintain them (is there a lesson to be learned from ethers.nim? I have no knowledge of that story).

However, this effort to share output and resources will always be capped by the maturity of each project.
A BU’s critical path to sustainability may not include the features needed by another BU. Such feature may part of the problem domain, but the timing may not aligned.

E.g. covert traffic might make sense for Waku (don’t quote me on this) yet is not part of the critical path to MVP/sustainability. However, it’s needed for Nomos’ critical path to MVP.


Zooming out and comparing the effort to the Web3 trifecta: what is the end goal? Success as individual pillars, or cohesion as a whole? We can look at the original attempts and learn from them. Ethereum did not need Swarm and Whisper to be successful, and it's only in recent years (4-5 years after launch) that the need for a Swarm (data availability domain) and a Whisper (social Web3) has arisen.

In terms of risk management, it makes sense to decouple dependencies and risks to maximize the success of each project and avoid an "if A fails then B will also fail" scenario.


As previously stated, we may look for cohesion/glue/interoperability at the app level. The Status app is the attempt to bring all the technologies into one and demonstrate this cohesion.
However, back to the (1) and (2) angles: the Status app's critical path to success/MVP does not include Codex or Nomos. At the same time, Waku's path to sustainability should not wholly rely on the Status app's market share and success.


My conclusion?

  1. Avoid duplicate effort where possible. Vac is a good attempt at this, but I think it can be pushed further, especially in regard to the Nim ecosystem but also in terms of research output from non-Waku BUs.
  2. BUs should aim to consult each other or at least keep an open line of communication when working in the domain of another BU.
2 Likes

I also can’t stop thinking about how much bigger this success could be when storage and messaging would actually work. There are tons of early projects that are in a holding pattern because of those missing components.

In my perception, Ethereum became a platform for DeFi and “Bankers 2.0” because of those missing parts, instead of becoming the backbone for revolutionising social media, democracy, and unlocking long overdue cypherpunk dreams.

Let’s think past “problem solving” to start dreaming about the possibilities a fully working trinity could bring.

5 Likes

Maybe we can have a fortnightly moment with BU leads?

2 Likes

The weekly update delivered by each project to the leads-roundup channel within the Logos Discord serves this purpose, at least foundationally. It was structured this way to avoid the overhead and logistics of “another meeting” for teams to join while keeping project information flowing and available for everyone at a reasonable cadence.

That being said, it only works if people read it, which allows needed cross-project follow-ups to happen at the time they're needed. For now, since I don't see much interaction or feedback (especially cross-project interaction), it serves the purpose of me keeping track of each project and creating monthly reports based on that and other information I may be privy to that isn't captured in the weekly updates (hiring, milestone changes, DM convos, leadership thoughts, etc). These monthlies can be found on the roadmap site.

If the structure of the weeklies isn’t good enough to instigate cross-project conversations, let me know and we can move into that direction.

I acknowledge that we lost the “RAID” part from the previous Google sheet driven monthly Status report.

At Waku, we spent time every month updating the RAID section but did not feel it was read. I am now working towards re-adding that as part of the weekly update.

I think the RAID section should be the minimum interface for BU coordination, to highlight blockers and dependencies.
Maybe the issue is that the "blocker" wording in the current template is not appropriate. IMHO, the RAID framework makes sense for tracking long-term dependencies and risks.

1 Like

I want to add a little bit of color to this point.

I see the Logos stack eventually becoming fully interoperable. But due to varying project maturities and timelines, we’re making practical choices to deliver core functionalities independently. For instance, Codex can temporarily leverage existing execution layers. This speeds up iteration and lets us build a user base without compromising future integration with our own execution engine. So, each project can advance independently but with the aim for tighter integration down the line.

2 Likes

There are many lines of thought that have led me to the conclusion of Logos and its intended application. I'm sure I'll omit some crucial aspects, but I'll try to reconstruct the history and convey what Logos is and why it matters. Everyone working on Logos projects is expected to understand and commit to this.

Diving into the Net

It all started for me at these pop-up weekend markets. In Australia we called them swap meets; I think Americans call them flea markets. It was often a car park filled with people selling their goods out of the boots of their cars. What caught my attention were the guys selling shareware and pirated games on cassettes as well as 5¼ and 3½ inch floppies.

Using my pocket money I became a regular customer, and they taught me how to connect to Bulletin Board Systems (BBSs), where they were finding this software.

Around this time our family had moved further into the country. With little means of physical access to my friends from school, I soon found myself retreating into the net; the majority of my 90s childhood was spent glued to my CRT screen, my parents complaining they couldn't use the telephone as our modem whirred away over the line.

“Information wants to be free” - Stewart Brand

Shortly after, my access to information grew: I learned how to set up DCC send/receive bots on IRC and leech from FTP servers as well as Usenet. Through "the scene" I found my kindred spirits, patterns of thought that I hadn't encountered in meatspace, and a perspective on the world that appealed to my prepubescent sense of rebellion.

Through this social circle, I began learning the craft of cracking, using SoftICE to cheat at games and bypass license checks in software. I became aware of the International Subversives and caught the tail end of phreaking: creating beige boxes, going pit diving, using Telecom's engineer codes, and entering PABX systems. I eventually made my way to lurking on the Cypherpunks mailing list. Here I became exposed to the concepts of Cryptography, Agorism, Counter-Economics, Austrian Economics, Crypto-Anarchy, Anonymous Remailers & Data Havens.

What I didn’t realise at the time, except maybe by faintest of intuitions was that there was something deeper that was happening. It wouldn’t be much later in life that I would develop an appreciation for Piracy, or more specifically the techniques and tools that cyber-pirates were using.

In "The Pirate Organization: Lessons from the Fringes of Capitalism", Rodolphe Durand & Jean-Philippe Vergne argue that piracy drives capitalism's evolution and foreshadows the direction of the economy - pirates are drivers of innovation.

The authors argue that, historically, pirates are the ones who forge the path forward. Operating in grey areas, they rely on qualities such as speed, inventiveness, and agility for survival; they play with the concept of state sovereignty, and consequently expand the reach of capitalism as their behaviours become formalised and integrated into the system.

The point here is that piracy operates on the fringes of capitalism and of the law. This frontier of human activity is one of the most interesting places for human creativity and hyper-competition, and it allows you to see into the future.

Piracy is not something to 'squash' or be afraid of; the behaviour exposes and exploits inefficiencies in our social systems, which ultimately improves the flexibility and adaptability of the system, expands the frontier, unlocks new modes of commerce, and creates more value.

Virtual Societies & Economic Realities

As I mentioned, being physically isolated, I relied on the Internet for a large part of my socialisation.

I participated in precursors to the 'Metaverse': Multi User Dungeons (MUDs) and MOOs - multiplayer chats, text-based virtual worlds - and by the mid-90s I was participating in Active Worlds (a precursor to Second Life) as well as Ultima Online. Both of these programs were major motivators for me to learn C and write more sophisticated bots.

They were microcosms of human behavior and demonstrated familiar patterns of societies. The need for governance, negotiating property rights, and economies manifested in very real ways. The case of LambdaMOO, covered in the book "Crypto-Anarchy, Cyber-States & Pirate Utopias", is a great example.

What these virtual societies showed me is that the mechanisms and institutions for social coordination created to solve problems and mediate life in these virtual worlds looked similar to those found in the real world. In these virtual worlds, these mechanisms and institutions could be rapidly experimented with and deployed, and new social organisations could be formed - a concept which I would later learn is called "competitive governance".

The question then became: if real-world institutions could work in virtual worlds, could institutions made for virtual worlds work in the real world? True enough, enforceability within virtual worlds is a lot stronger (by virtue of being code), but aside from that, I couldn't see any reason why not.

This idea became more real to me when Edward Castronova, an economist, published the paper "Virtual Worlds: A First-Hand Account of Market and Society on the Cyberian Frontier", showing that virtual worlds like Ultima Online had economies rivalling those of Nation-States.

A similar argument is made today with Big-Tech companies who have more users and more revenue than GDPs of entire countries.

Proto-Sovereign Systems

Takedowns of central repositories (i.e. Topsites) led pirates to adopt p2p technologies. We saw protocols and clients like Direct Connect, Napster, LimeWire, eDonkey, Gnutella, Soulseek, Kazaa, etc. come to the forefront of the scene; moreover, they combined this with easier UX, making these networks more accessible to a wider audience.

While most of these networks eventually met their demise, usually through their design, the underlying idea persisted: distributed systems that could be resilient to adversaries, resist coercion, and operate in hostile environments.

Soon we would see BitTorrent, and to this day BitTorrent accounts for around 3% of all internet traffic.

Upholding Rights with Code

Another approach was to decouple and generalise the activity itself from these p2p communication protocols and strengthen the protocols themselves. This idea is expressed in projects like TOR & I2P, which advocate for censorship-resistant, anonymous communication - upholding civil liberties such as Free Speech and the Right to Associate by making the network blind, and politically neutral, to the contents flowing through it.

This generalisation and complexity began to grow, with projects like Freenet combining messaging, distributed storage and a Web of Trust - forming a decentralised tech stack that allowed for decentralised applications such as micro-blogging and version control.

GNUNet would take the concept further providing a software framework and primitives for decentralised applications such as payment networks, decentralised identity, filesharing and messaging.

Again these networks persist through to this day, and they have shown they are capable of upholding the properties of these systems.

These protocols were material manifestations of a body of thought exemplified by John Perry Barlow's Declaration of the Independence of Cyberspace, Eric Hughes's A Cypherpunk's Manifesto, and Tim May's Crypto Anarchist Manifesto.

In “A Declaration of the Independence of Cyberspace,” John Perry Barlow asserts that governments have no authority or right to control or govern the online world known as cyberspace. He argues that cyberspace is a separate and independent entity, created and governed by its users through collective actions.

Barlow criticizes governments for their lack of understanding and engagement with the online community, claiming that they do not have the knowledge or legitimacy to impose their laws and regulations on cyberspace.

He emphasizes the inclusivity and freedom of expression in cyberspace, rejecting the notion of physical coercion as a means of governance. Barlow also criticizes specific laws, such as the Telecommunications Reform Act in the United States, which he believes infringe upon individual liberties and violate the principles set forth by influential figures like Jefferson and Washington.

He warns against attempts by governments to control or restrict access to cyberspace, arguing that such efforts will ultimately fail in a world where information can be freely shared across borders. In conclusion, Barlow calls for the creation of a more humane and fair civilization within cyberspace.

Eric Hughes’s A Cypherpunk’s Manifesto argues that privacy is essential in the electronic age and distinguishes it from secrecy. It emphasizes the importance of freedom of speech in an open society and the power of electronic communications.

Hughes suggests that to protect privacy, transactions should only reveal necessary information and advocates for anonymous transaction systems. He also highlights the role of cryptography in ensuring privacy by encrypting communication and protecting identity.

The manifesto calls for individuals to defend their own privacy and build anonymous systems, with cypherpunks writing code and publishing it for all to use. It rejects regulations on cryptography, asserting that it will spread globally. He emphasizes the need for privacy to be part of a social contract and invites engagement from others to make networks safer for privacy.

In Timothy May’s “The Crypto Anarchist Manifesto,” he discusses the potential for computer technology to enable individuals and groups to communicate and interact anonymously.

He explains that this will have significant implications for government regulation, taxation, information security, trust, and reputation. The technology for this revolution has existed in theory for a while but is now becoming practically realizable with advancements in computer networks and personal computers.

May acknowledges that the state will try to slow or stop the spread of this technology due to concerns about national security, criminal activities, and societal disintegration. However, he argues that crypto anarchy will inevitably spread and fundamentally alter the nature of corporations and government interference in economic transactions.

He compares this shift to how printing reduced the power of medieval guilds and how barbed wire changed concepts of land and property rights in the frontier West.

9/11 - The Rise of the Surveillance State

What made the cypherpunk messages real to me was the orchestration of & response to 9/11.
At this point, Western liberal democracies' governments unhinged themselves completely from their people.

Mass surveillance and the surveillance state became ubiquitous, and it was only due to brave men that we were even aware that our civil liberties had been taken away from us, and that Western Civilization had been undermined. The capacity (and application) for Tyranny has been steadily growing since, and if the people don't do anything about it, it will be too late for our children.

Temporary Autonomous Zones, Virtual States & Cyber-states

By my mid-teens I had gotten hold of the book "Crypto-Anarchy, Cyber-States & Pirate Utopias", and another book called "Virtual States" by Jerry Everard, along with Bruce Sterling's "Islands in the Net" and Hakim Bey's "Temporary Autonomous Zones". There was a growing body of people who started to see these parallel threads coming together. Even Leslie Lamport saw in 1988 what was possible, in his paper "The Part-Time Parliament".

There was the potential for a new order, a new power, a new system that could rival nation-states - one that had the competitive prowess and adaptability of a Pirate Organisation, one that was extra-legal and globally accessible. A system that would change the direction of humanity forever.

But where Freenet and GNUnet stopped short was in using these technologies to create practical institutions and bring them into the real world. For that we had to wait for Hashcash and Bitcoin: the separation of State and Money. Here I became aware of Game Theory & Mechanism Design - Crypto-Economics.

With Bitcoin, we now had the properties of those distributed systems - one that upheld rights, could operate in a hostile environment, and could resist coercion - but one that could also secure and transmit value anywhere on the Net. It had the basis for a virtual-world economy that could rival the GDP of a country, but its application was in the real world. Moreover, it worked.

Exit, Voice & Loyalty

Post the 2008 financial crash, Occupy Wall Street & Bitcoin arrived in a similar timeframe, and could be viewed as two forms of protest under Albert Hirschman's Exit, Voice, and Loyalty.

The Occupy movement arose out of the Arab Spring, which was itself precipitated by another cypherpunk project - Wikileaks, a hack on the state in order to improve governance, using truth bombs to expose and partition the conspirators' social graph and inform the citizenry of reality for better democratic governance.

The protests against the financial elite eventually made their way onto American soil and became known as Occupy Wall Street. This was a time when the Left had an economic argument and wasn't yet co-opted by the regime and distracted by identity politics. Ultimately they took up Voice, or Protest, which didn't go anywhere; the elites dismantled their movement, and Micah White would concede defeat by writing the book "The End of Protest".

On the other hand, Exit - peacefully opting out of the system, as with Bitcoin - has spawned an entire industry that at its ATH commanded $3 trillion in value and rests at around $1T today.

Here we see the political potential of our technologies.

Our History - Towards a real world Sovereign Stack

It was around 2010 that all of this really began to click for me, and I started committing myself to Crypto. Similar to the move from filesharing to anonymous messaging, it was time to generalise the application away from the technology (public blockchains) - the real magic is in creating a decentralised technology stack.

There were attempts to push Bitcoin Script as far as we could, and projects like Mastercoin, Counterparty, Coloured Coins, and Namecoin began to surface in realisation of Nick Szabo's idea of The Smart Contract.

It wouldn’t be until 2014 when Ethereum was announced, by replacing bitcoin script with a virtual machine, you could now have very general purpose smart contracts - A programmable public blockchain. But the 2014 vision of Ethereum was widely different than it is today, it promised to be a decentralised technology stack that would, like Freenet & GNUNet, offer private p2p messaging (Whisper) and a distributed data store (Swarm).

For me, it was the perfect synthesis of all these ideas - this was the Cyberstate, the Virtual State, the Temporary Autonomous Zone - and I began to contribute to the project any way I could: hanging out with Jeff in Amsterdam to work on geth, setting up the Amsterdam meetup, contributing to the Peer Discovery protocol and Vyper, and joining many discussions.

I ultimately ported EthereumJ to Android, and my project, an unreleased SPV client called Coinhero, was renamed to Syng - named after the Philip Syng inkstand used to sign the Declaration of Independence and the US Constitution, the idea being that you could put that ink into the hands of everyone. That project would later become Status, a WeChat alternative without the State-Surveillance Apparatus.

Something happened to Ethereum that I still haven't gotten an answer for: after the split of founders like Gavin Wood, Charles Hoskinson, and Vitalik Buterin - and perhaps due to the constraints of the amount of funds raised - Ethereum became largely focused on the Blockchain component.

Status was pretty much the only project using Whisper, so we ultimately took over the protocol and continued developing it as Waku. Status also relied on a light client protocol, LES, which became dormant, and Swarm at the time had project management issues.

We found ourselves incurring more platform risk, and in the effort to build Status we moved further into protocol design and infrastructure. We built out Nimbus as an Ethereum 1 & 2 client, with Fluffy supporting the new light client protocol.

Finding no solution for our requirements in decentralised file storage we’ve built out Codex.

And now, given the new attacks on our systems (like the one on Tornado Cash), and given projects like Zcash and Monero that use privacy to achieve the fungibility property of Sound Money (aka political neutrality), we have a reason to generalise these applications and create a privacy-focused (politically neutral) heterogeneous programmable multichain client: Nomos.

Ethereum is widely successful and it continues to innovate; it has the best learning and research community, and I appreciate it for what it is and what it has accomplished. Ethereum has shown us that, like Bitcoin, you can create a more sophisticated economy offering financial services. In its early days it also showed the potential of a whole myriad of decentralised applications, granted most of them died off as Ethereum wasn't able to support them.

Logos

However, Ethereum as a unified decentralised technology stack didn't materialise; the manifestation of a cyberstate - a virtual state - remains an unrealised, latent cypherpunk dream.

And that's why Nomos, Waku & Codex exist: they are core primitives in a decentralised technology stack, and they are intended to strengthen one another. They come together as Logos, which is designed to be a cyberstate / virtual state / network state.

With all three of these protocols we have advanced them and done something novel, but they're not as interesting alone as they are together - and then there's their application.

While we recognise each of these projects has independent timelines, the ideal state is that there is a unified privacy-preserving communications layer (Waku), that storage (Codex) is done on top of it, and that the blockchain's (Nomos's) state is offloaded to storage while consensus is done over the comms layer.
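To make that layering a bit more concrete, here's a toy sketch in Python (hypothetical interfaces of my own, not the actual Waku/Codex/Nomos APIs) of the dependency direction described above: storage announces content over the comms layer, and the chain offloads state to storage while its consensus traffic rides over comms.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List, Optional


    @dataclass
    class CommsLayer:
        """Stand-in for a Waku-like pub/sub messaging layer."""
        subscribers: Dict[str, List[Callable[[bytes], None]]] = field(default_factory=dict)

        def subscribe(self, topic: str, handler: Callable[[bytes], None]) -> None:
            self.subscribers.setdefault(topic, []).append(handler)

        def publish(self, topic: str, payload: bytes) -> None:
            for handler in self.subscribers.get(topic, []):
                handler(payload)


    @dataclass
    class StorageLayer:
        """Stand-in for a Codex-like durable store that announces content over comms."""
        comms: CommsLayer
        blobs: Dict[str, bytes] = field(default_factory=dict)

        def put(self, cid: str, data: bytes) -> None:
            self.blobs[cid] = data
            # storage rides on top of the comms layer to advertise availability
            self.comms.publish("storage/announcements", cid.encode())

        def get(self, cid: str) -> Optional[bytes]:
            return self.blobs.get(cid)


    @dataclass
    class Chain:
        """Stand-in for a Nomos-like chain: consensus messages travel over comms,
        bulky state is offloaded to storage, and nodes keep only references."""
        comms: CommsLayer
        storage: StorageLayer
        state_refs: List[str] = field(default_factory=list)

        def propose_block(self, block_id: str, state: bytes) -> None:
            self.storage.put(block_id, state)                              # offload state to storage
            self.comms.publish("consensus/proposals", block_id.encode())   # consensus over comms

        def on_proposal(self, payload: bytes) -> None:
            self.state_refs.append(payload.decode())                       # keep a reference, not the state


    if __name__ == "__main__":
        comms = CommsLayer()
        storage = StorageLayer(comms)
        chain = Chain(comms, storage)
        comms.subscribe("consensus/proposals", chain.on_proposal)
        chain.propose_block("block-1", b"full state blob")
        print(chain.state_refs)          # ['block-1']
        print(storage.get("block-1"))    # b'full state blob'

The point of the sketch is only the shape of the dependencies - comms at the bottom, everything else riding on it - not any claim about how the real protocols will actually expose these interfaces.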

A self-sovereign network that is intended to have real-world corruption-resistant, censorship-resistant, stable & transparent institutions built on top of it, which then provide competitive governance by filling gaps in governance in the real world, wherever the Net can reach.

Creating a decentralised technology stack for people to create censorship- & corruption-resistant decentralised applications is one thing, but its real-world application is where it really shines: a new order which can govern our lives and provide stable, fair & just institutions to anyone connected to the Net.

In short we’re no longer pirating music and software, we’re ‘pirating’ insitutions and putting it into the hands of those who need it, to underserviced citizens.

Future

This post is getting a little long, and I could write a lot more about the why. Suffice to say, I've come to believe this technology, combined with a new ideology, is needed to save the West. Our civilization has been intentionally and unintentionally eroded, and we have seen the futile attempts to reform it. Exit - the creation of a parallel system - is the most likely viable answer.

If such a system can be realised, it will represent a new political system - one that is more legitimate than modern liberal democracies based on their own terms.

It can make a stronger case for Popular Sovereignty (Consent of the Governed), as it enables Explicit Consent for users. It can enable competitive governance and autocentric law. Parallel Societies can be made with it.

It can manifest an order that isn't inherently physically coercive, and while physical security still needs to be worked out (I'm still partial to a Hoppean-style production of private defense and Friedman-like insurance coverage), it can unlock tremendous value and be of application to developed countries, Favelas, Gated Communities, Neighbourhoods/Ethnic Enclaves, SEZs, Charter Cities, Autonomous Regions, Secessionists & Proto-States.

It’s also not farfetched - such a system is remiscient of the Jewish diaspora’s Qahals or Kehillas a non-territorial “State within a State”.

It may even have application as a model for world order. The United States DoD formally recognised Cyberspace as a 5th domain of conflict, and the United Nations recognised nation-state claims of sovereignty over cyberspace. With this come some clear definitions that allow us to draw territory in cyberspace.

For example, if we can obfuscate the network traffic, the participants in such a system, and the code or logic they are interacting with or deploying, then such a system would start challenging sovereign claims over cyberspace - and if you can do it for one nation-state, you can do it for all, making such a disintermediated medium useful for state and non-state actors to enter binding agreements in the world society while maintaining their sovereignty should they choose to leave the system. Moreover, such a system would be blind and impartial to the contents flowing through it, in which case a politically neutral monetary policy may be suitable for a world reserve currency.

The realisation of Logos as a decentralised tech stack, intended for real-world deployment of institutions to fill in the gaps of governments, has a tremendous amount of value. The World Bank issued a report named "Where Is the Wealth of Nations?" and found that the Rule of Law was by far the largest wealth generator of any country, even over natural resources.

Such an idea is truly awesome, and I hope you will build it with me.

Books

We have a whole Zotero library where I’ve compiled all the resources that have gone into this idea.

Some starting material may be:

  • The Pirate Organization
  • Crypto-Anarchy, Cyber-states & Pirate Utopias
  • Virtual States
  • Your Next Government
  • The State in the Third Millennium
  • The Stack: On Software and Sovereignty
  • Extrastatecraft: The Power of Infrastructure Space
  • Anarchy, State & Utopia
  • Nations by Consent
  • Machinery of Freedom
8 Likes

The end goal is about deploying institutions to underserviced citizens, and it requires all three components to do that and to maintain the integrity of the system without succumbing to coercion. So my answer would be "Cohesion as a whole".

It depends what you mean by success.
I disagree that the needs have only been raised in recent years; they were there from the beginning, if not understood, in systems even earlier than Ethereum. Swarm was always intended to be the data availability layer for Ethereum - it was the original design; the teams just had different execution timelines and not enough collaboration between them.

I disagree. For example, Waku's & Status's "success" are intertwined; Waku is not going to be "successful" if it cannot support applications like Status. If it can handle applications like Status, then it can handle Nomos consensus.

You don’t define what “success” looks like for a project that’s “independent and decoupled”, such a project would end up ultimately having to support some application, and you tie the idea of interoperability to the only reason of leveraging market shared, as if there’s no other reason to do so.

You make this argument time and time again - and again, I disagree with you in the short term, and agree with you in the long term. The only reason Waku exists is because of Status, and if Waku does not support Status, then the only reason Waku exists is to support Logos, and if Waku does not support Logos projects, then why would any other project risk their business by adopting untested technology such as Waku?

8 Likes

Thank you for this @jarradh, and for the related call; it made me shift my stance on several of the matters discussed above.

To have strong cohesion between the projects, the line of communication needs to remain open. I understand this is why Vac was created in the first place. If all projects leverage Vac’s available resources, then Vac can become this forum where cross-pollination allows deeper integration.

Practices such as the Vac research calls are a great first step, but I believe it could go further, with specs being pushed to Vac RFCs and the p2p/cryptography/etc. teams being consulted for protocols within their domains. I believe the tokenomics team is a good example of this, with them being involved across all BUs.

If Vac can act as this common point of awareness, they could also help with collaboration, or at least with identifying and directing topics of collaboration. I am not exactly sure what that looks like, but I would be happy to further brainstorm on this.

1 Like