The Different Stages of Privacy: A Taxonomy

Guy Zyskind, Founder of Fhenix
07 November 2025

Over the last few years, the Ethereum community did the hard thinking on scaling, leading to a clear taxonomy. We argued about optimistic vs zk rollups; we argued about zkEVM flavors; and eventually we got rollup “stages” and zkEVM “types”. That vocabulary didn’t just describe progress; it caused it. Teams could say “we’re Stage 1 now, aiming for Stage 2 by Q4,” and everyone understood what that meant.

As scaling gets good enough to stop being the bottleneck, the next question is obvious: privacy. If rollup stages tell you how secure and decentralized execution is, we need an equally crisp language for how well a chain keeps data private. There are already dozens of projects building privacy tech, and likely hundreds implementing those solutions, often without a clear understanding of the underlying security tradeoffs. We need a clear taxonomy.

This post proposes a simple ladder of privacy stages. It borrows the spirit of the rollup stages: objective, testable milestones. It’s important to state that there are many dimensions along which privacy could be evaluated, so here we focus solely on who can decrypt the data. This also creates an analogy to rollup stages, which effectively ask: who can steal your funds?

Global privacy (a.k.a. “private shared state”)

First, what are we even grading?

  • Global privacy means the shared state of the chain (balances, contract storage, app data) is private by default. State remains encrypted at rest and in use, and the protocol still proves that state updates are correct.
  • In a globally private system, no single party can read everything, or anything they’re not authorized to see, yet the system can still compute on that private data. For example, you can run a sealed-bid auction where bids remain private but are compared and settled correctly, or you can compute aggregate risk scores, matching, and fraud checks, all without revealing the underlying plaintext. That’s the point: privacy without losing shared, programmable utility. (A toy sketch of computing on encrypted data follows below.)

What we’re not grading here:

  • Local privacy (client-side ZK) protects a user’s private inputs but does not make the shared state private. It’s useful in some settings (e.g., Payy, Railgun, Privacy Pools), but programmability is limited: you can’t easily ask global questions like “how many fraud cases happened chain-wide?” or “who won the sealed-bid auction?”. That pushes logic into partial, per-user computations, makes apps harder to build and reason about, and doesn’t support rich, multi-party workflows over a private common state. (Notably, projects that started local-only, like Aztec or Worldcoin, have since added global privacy via extra cryptography such as MPC.)
  • “Privacy by operator.” If one company/entity is trusted to see all plaintext data and promises not to leak it, that’s an operational policy, not a chain property.
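
To make “compute on encrypted data” concrete, here is a minimal toy sketch, assuming nothing about Fhenix’s actual stack: it uses Paillier, a classic additively homomorphic scheme, to aggregate private inputs without decrypting any individual one. Paillier supports only addition (FHE supports arbitrary programs), and the parameters below are deliberately tiny toys.

```python
# Toy Paillier cryptosystem (additively homomorphic): Enc(a) * Enc(b) = Enc(a + b).
# Parameters are illustrative toy sizes only; real deployments use ~2048-bit moduli
# and, in a threshold setting, secret-share the decryption key across operators.
import math, random

p, q = 103, 107                  # toy primes (never use sizes like this in practice)
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)     # λ = lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)             # μ = λ^{-1} mod n (valid since g = n + 1)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return (((x - 1) // n) * mu) % n

# Three private inputs (e.g., per-user risk scores), encrypted by their owners.
scores = [42, 17, 99]
ciphertexts = [encrypt(s) for s in scores]

# Anyone can aggregate the ciphertexts without seeing any individual plaintext.
agg = 1
for c in ciphertexts:
    agg = (agg * c) % n2

assert decrypt(agg) == sum(scores)   # only the key holder learns the total
```

In a threshold deployment (discussed below), the decryption key itself would be secret-shared across operators, so no single party could run decrypt alone.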

Minimal glossary (skip if you’re familiar)

  • TEE (Trusted Execution Environment): a secure “box” inside a CPU. Fast, nice UX. If the box or its vendor breaks, privacy breaks. Effective privacy threshold = 1.
  • FHE (Fully Homomorphic Encryption): compute directly on encrypted data. (Our focus at Fhenix.)
  • MPC (Multi-Party Computation): multiple machines compute on secret-shared data so no one sees the whole secret.
    • In practice, threshold decryption is used with FHE too: the decryption key is secret-shared across operators/validators so no single party can decrypt.
  • Blocking quorum: a small, diverse, non-custodial group of operators that can halt decryption or state advancement (great for privacy; can hurt liveness if abused).

Defining security for privacy

Privacy security is a T-out-of-N guarantee.

  • T = minimum number of operators whose collusion compromises privacy.
  • N = total number of operators holding shares/authority.

TEEs

  • Threshold: effectively T = 1, regardless of N. Any one operator with a compromised/misconfigured enclave (or vendor/firmware failure) can leak plaintext.
  • Tradeoff: Larger N can still help correctness/censorship-resistance (more parties running checks), but privacy risk increases with each additional enclave you must trust not to fail.

MPC and (threshold) FHE

  • Threshold: Configure T via secret sharing. Privacy holds unless T or more collude (MPC: data shares; FHE: key shares).
  • Privacy vs liveness:
    • Maximal privacy: T = N → any single “bad” or offline node can halt the system (a liveness failure).
    • Lower T: improves liveness (easier to make progress) but reduces privacy (fewer colluders needed to decrypt).
  • Blocking quorum: With a T-out-of-N design, N − T + 1 parties are sufficient to block decryption or state advancement (as defined above).

Note: Real systems impose constraints on feasible T relative to N (performance, network churn, cryptographic limitations). Those details vary by implementation; the core intuition is the T-out-of-N lens and how it governs both who can decrypt and who can block.
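
As a concrete illustration of this lens, below is a minimal sketch of Shamir secret sharing, the building block behind both MPC data shares and threshold-FHE key shares. The field size and parameters are illustrative only, not any specific implementation.

```python
# Minimal Shamir secret sharing over a prime field: the T-out-of-N lens in code.
# Any T shares reconstruct the secret (so T colluders break privacy); any
# N - T + 1 withholding parties leave fewer than T shares (a blocking quorum).
import random

PRIME = 2**127 - 1  # field modulus (a Mersenne prime, large enough for a demo)

def share(secret: int, T: int, N: int) -> list[tuple[int, int]]:
    # Random degree-(T-1) polynomial with f(0) = secret; share i is (i, f(i)).
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(T - 1)]
    return [(i, sum(c * pow(i, k, PRIME) for k, c in enumerate(coeffs)) % PRIME)
            for i in range(1, N + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0 recovers f(0) = secret.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

T, N = 3, 5
shares = share(secret=123456789, T=T, N=N)
assert reconstruct(shares[:T]) == 123456789      # any T shares suffice
assert reconstruct(shares[:T - 1]) != 123456789  # T-1 shares: wrong value, almost surely
```

With T = 3 and N = 5, any N − T + 1 = 3 parties that withhold their shares leave fewer than T available, which is exactly the blocking quorum from the glossary.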

The stages of privacy

We can now define the stages of chain privacy. As mentioned, similar to rollup stages, which define how secure a rollup is against asset theft, here we define how secure a chain is against data theft. Note that unlike rollups, most privacy solutions, including Fhenix’s CoFHE, are integrated into an existing chain, so it does not matter whether that chain is an L1 or an L2.

Stage 0: TEE-only privacy (“trust the box”)

  • What it is: Global state is decrypted inside TEEs; the world sees ciphertext.
  • Upside: Great performance and developer ergonomics.
  • Downside: One vendor/firmware/side-channel failure can reveal everything (T = 1). Failures can go undetected, and new vulnerabilities are found every year. Even when detected, mitigation is often limited and can take months, during which the system remains vulnerable.
  • Verdict: Good for PoCs and an ever-shrinking set of use cases, such as very large open-source LLMs (e.g., Secret Network’s SecretAI). TEEs will always be useful as defense-in-depth, but they are insufficient as the sole privacy-preserving mechanism.

Stage 1: Pure crypto compute (FHE/MPC) with training wheels

  • What it is: Data stays encrypted in use via FHE and/or MPC. We have T-of-N security, where T can be arbitrary (not just T = 1), but there is no blocking quorum, and other hardening mechanisms may be lacking as well.

In simple terms, this means a set of T operators cannot be assumed to be non-colluding: they can keep the chain “alive” while exfiltrating data. For example, they may be run by the project directly or by its affiliates (the training wheels), or they may be pseudonymous with no publicly available proof that they are independent entities.

  • Upside: All the cryptographic ingredients needed to add global privacy to a chain, without relying on a TEE. DevEx is already set, and all further improvements are mostly invisible to developers and users.
  • Downside: Harder to guarantee that data is truly protected. For example, if N = 10 and T = 7 but eight operators are run by the project’s team, the team can still leak all data if it chooses.
  • Verdict: Required as a stepping stone, but users should realize that until stage 2 is reached there is still a lot of trust involved.

Stage 2: Blocking quorum + defense-in-depth

  • What it is: A cryptographic solution (FHE/MPC) as the main layer of protection (stage 1), plus at the very least a blocking quorum. We provide a list of additional recommended defense-in-depth ideas and guidelines for stage 2 below.
    • Distributed Key Generation and Setup: any secrets produced in a trusted setup are generated via an MPC protocol instead.
  • Upside: Unlikely (though still theoretically possible) that any privacy leakage occurs, unless there’s a bug an outside attacker can exploit.
  • Downside: Minimal, but as usual with more decentralization, governance and operations are somewhat harder.
  • Verdict: This should be the practical gold standard for privacy.

Stage א (Aleph/Infinity): Indistinguishability obfuscation (iO)

  • What it is: A theoretical end-state where the program itself is the vault. Instead of thresholdizing keys or secret-sharing data, an obfuscated state-transition function (the “privacy VM”) can run anywhere while revealing nothing beyond what is required. In principle, no T-out-of-N is needed: privacy stems from iO, not from who holds key shares.
  • Upside: Eliminates key-management and quorum complexity (T=infinity). No one can ever leak data.
  • Downside: It’s also stage “Infinity” for a reason: too theoretical today, at least for global (programmable) privacy. Constructions are fragile, cryptographic assumptions are heavy, performance is wildly impractical, and applied security is unsettled. We may never get there, or we may get there with other constraints and only for a limited set of use cases.
  • Verdict: A north star, not a roadmap. If robust iO ever becomes practical, global privacy could be native without thresholds; until then, the real world lives in stage 2 (with additional hardening).

Optional Hardening for Stage 2

There are many ways to harden stage 2 privacy. We suggest a few ideas and guidelines here. Note that these could also apply to stage 1, as a building block towards stage 2.

  • Run each operator inside a TEE with a reproducible build. While TEEs are insufficient as a sole privacy solution (stage 0), they are great as a defense-in-depth mechanism, since they make collusion more difficult. Ideally, more than a single TEE vendor is used (Intel TDX, AMD SEV, AWS Nitro, etc.).
  • Distributed trusted setup ceremony: All shared secrets and public parameters are produced through a distributed key generation process that reduces any initial trust.
  • Permissionless and economic security for operators: stage 2 requires that a blocking quorum (at least N - T + 1 operators) be publicly expected to be non-colluding. The easiest way to achieve this is to have multiple reputable, mutually unaffiliated organizations run at least this number of operators, though this may present a challenge for liveness and censorship resistance. A different, arguably more complicated, measure is to select operators randomly from a large pool of pseudonymous operators, in a way that minimizes the likelihood of selecting colluding operators (see the sketch below). Combining this with stake and slashing conditions (note: this is more challenging to enforce against privacy failures) could approximate permissioned reputation in a permissionless environment. These ideas are not new and date back to the early days of Ethereum (see my Master’s thesis from 2016 as an example).
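
To give a feel for the random-selection idea, here is a back-of-the-envelope sketch: if committees are sampled uniformly from a large pool in which some fraction of operators collude, the chance of a colluding quorum landing in one committee falls off sharply. The pool size, collusion fraction, and committee parameters below are hypothetical.

```python
# Probability that at least T colluders are sampled into a committee of N,
# drawn uniformly (without replacement) from a pool with a known number of
# colluders: the tail of a hypergeometric distribution.
from math import comb

def prob_colluding_quorum(pool: int, colluders: int, N: int, T: int) -> float:
    # P[at least T of the N sampled operators are colluders]
    return sum(comb(colluders, k) * comb(pool - colluders, N - k)
               for k in range(T, N + 1)) / comb(pool, N)

# Hypothetical numbers: a 1,000-operator pool where 20% collude, committees
# of N = 30 with decryption threshold T = 21.
print(prob_colluding_quorum(pool=1000, colluders=200, N=30, T=21))  # vanishingly small
```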

Conclusion

Privacy needs its own “stages” the way scaling did. If rollup stages ask who can steal your funds, privacy stages ask who can decrypt your data. With that lens:

  • Stage 0 (TEE-only) is not enough. TEEs can be used to harden cryptographic solutions, not vice versa.
  • Stage 1 (pure crypto, training wheels) proves feasibility but still leans on trust.
  • Stage 2 (blocking quorum + defense-in-depth) is the goal: cryptography as the main line of defense, enough diversity between operators that they are unlikely to collude, plus TEEs for defense-in-depth.
  • Stage א (iO) is the asymptote: a great north star, but we might never get there.

The privacy landscape is currently a mess. Most people do not understand what kind of privacy guarantees they can expect. Our goal in this post was to set a simple standard for global, programmable privacy that all projects can follow and users can rely on.
