In recent years, some of the most thoughtful work on digital identity has not come from governments or large technology platforms, but from open research communities and blockchain foundations.
People often misunderstand or ignore this work, dismissing it as too theoretical or too far removed from real-world constraints. But if we take it seriously, it offers something missing from many identity debates: the recognition that trust, privacy, and scalability are not competing goals to be traded off against one another, but properties that follow from architectural choices.
Foundation-led papers, working groups, and proposals return to a consistent set of themes. "Minimise data by default": identity systems should collect only the personal data strictly necessary. "Separate proof from disclosure": a verifier should be able to check evidence without seeing the underlying personal information. And individuals should be able to participate without creating permanent, centralised records that document their identity and activities.
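To make "separate proof from disclosure" concrete, here is a minimal sketch in TypeScript using only Node's built-in crypto module. It assumes a hypothetical issuer that signs a derived predicate (over 18 or not) rather than the birthdate itself; the names AgeAttestation, issueAttestation, and verifyAttestation are illustrative, not drawn from any particular standard or library.

```typescript
// Hedged sketch: "separate proof from disclosure" with Node's built-in crypto.
// The issuer signs only the derived predicate ("over 18"), never the date of
// birth, so a verifier can check the claim without seeing the personal data.
import { generateKeyPairSync, sign, verify } from "node:crypto";

interface AgeAttestation {
  subject: string;   // holder identifier, e.g. a DID (illustrative)
  over18: boolean;   // derived predicate, not the raw birthdate
  issuedAt: string;
}

const issuer = generateKeyPairSync("ed25519");

function issueAttestation(subject: string, birthdate: Date) {
  // Approximate year length; precise enough for a sketch.
  const eighteenYearsMs = 18 * 365.25 * 24 * 60 * 60 * 1000;
  const attestation: AgeAttestation = {
    subject,
    over18: Date.now() - birthdate.getTime() >= eighteenYearsMs,
    issuedAt: new Date().toISOString(),
  };
  const payload = Buffer.from(JSON.stringify(attestation));
  return { attestation, signature: sign(null, payload, issuer.privateKey) };
}

// The verifier checks the issuer's signature over the predicate alone.
function verifyAttestation(attestation: AgeAttestation, signature: Buffer): boolean {
  const payload = Buffer.from(JSON.stringify(attestation));
  return verify(null, payload, issuer.publicKey, signature) && attestation.over18;
}

// Usage: the birthdate stays with the issuer; only the yes/no answer travels.
const { attestation, signature } = issueAttestation("did:example:alice", new Date("1990-06-01"));
console.log(verifyAttestation(attestation, signature)); // true
```

Real systems rely on richer mechanisms, such as selective-disclosure signatures or zero-knowledge proofs, but the principle is the same: the verifier learns the answer to a question, not the data behind it.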
These ideas are not radical. They are a response to the failures we have already seen.
Foundations that have spent years studying decentralised systems understand something that policy conversations often overlook: scale changes everything. A design choice that seems harmless in a small system can become dangerous when applied to millions or billions of people. Identity systems, in particular, amplify harm when they are built around aggregation rather than restraint.
This is why many foundation-led approaches emphasise credentials (verifiable statements about a user, such as age or membership) over accounts, claims (assertions made by the user or another party) over profiles, and verification (the process of confirming a claim or credential's validity) over storage. The goal is not anonymity for its own sake, but proportionality—meaning that only the necessary information is revealed in a particular context.
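One way to picture credentials, claims, and verification without storage is salted-hash selective disclosure, in the spirit of SD-JWT style designs. The sketch below is an illustration under assumed names (SelectiveCredential, present, verifyPresentation are not a real library's API): the issuer signs per-claim digests, the holder discloses only the claims a verifier requests, and the verifier checks them against the signed digests.

```typescript
// Hedged sketch of proportional, selective disclosure using Node's crypto.
import { createHash, randomBytes, generateKeyPairSync, sign, verify } from "node:crypto";

type Claim = { name: string; value: string; salt: string };

interface SelectiveCredential {
  subject: string;
  claims: Claim[];     // held by the user, never sent wholesale
  digests: string[];   // one salted hash per claim, signed by the issuer
  signature: Buffer;   // issuer signature over subject + digests
}

const digest = (c: Claim) =>
  createHash("sha256").update(`${c.salt}:${c.name}:${c.value}`).digest("hex");

const issuer = generateKeyPairSync("ed25519");

// The issuer signs only the digests, so any subset of claims can later be
// disclosed and checked against the same signature.
function issueCredential(subject: string, attributes: Record<string, string>): SelectiveCredential {
  const claims = Object.entries(attributes).map(([name, value]) => ({
    name, value, salt: randomBytes(16).toString("hex"),
  }));
  const digests = claims.map(digest);
  const payload = Buffer.from(JSON.stringify({ subject, digests }));
  return { subject, claims, digests, signature: sign(null, payload, issuer.privateKey) };
}

// Proportionality in practice: the holder reveals only the claims the
// verifier actually asked for; everything else stays hidden.
function present(cred: SelectiveCredential, requested: string[]) {
  return {
    subject: cred.subject,
    digests: cred.digests,
    signature: cred.signature,
    disclosed: cred.claims.filter((c) => requested.includes(c.name)),
  };
}

function verifyPresentation(p: ReturnType<typeof present>): boolean {
  const payload = Buffer.from(JSON.stringify({ subject: p.subject, digests: p.digests }));
  const signatureOk = verify(null, payload, issuer.publicKey, p.signature);
  const disclosedOk = p.disclosed.every((c) => p.digests.includes(digest(c)));
  return signatureOk && disclosedOk;
}

// Usage: an age check sees "over18" but never "address" or "birthdate".
const cred = issueCredential("did:example:alice", {
  over18: "true", address: "10 Example Street", birthdate: "1990-06-01",
});
console.log(verifyPresentation(present(cred, ["over18"]))); // true
```

The design choice matters: because the signature covers digests rather than values, a verifier can confirm one claim without the holder handing over, or the verifier retaining, the rest of the profile.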
What stands out is how closely this thinking corresponds to regulatory principles on paper. Data minimisation means collecting only the data essential to a specific purpose. Purpose limitation means using data solely for the stated reason it was collected. Proportionality requires processing to be limited to what is appropriate and necessary. User control means giving individuals the ability to manage their own data. These concepts appear throughout legislation, yet they are rarely built into the systems those laws are meant to govern. The disconnect is not philosophical but practical.
Foundations operate without the same commercial pressures as platforms. They are not incentivised to collect data, monetise behaviour, or lock users into proprietary systems. This allows them to explore architectures that prioritise long-term safety and resilience, even when those approaches are harder to deploy in the short term.
Critically, this work does not reject regulation or institutional involvement. It recognises that identity systems must operate within legal frameworks and support real-world compliance. But it disputes the assumption that compliance requires centralisation. It also questions the belief that trust must be earned through visibility.
The future of digital identity will not be defined by ideology, but by outcomes. Systems that reduce harm, limit exposure, and earn public trust will endure. Systems that concentrate risk and demand permanent disclosure will face resistance, workarounds, and failure.
Leading blockchain foundations are right to insist that identity must be rethought at the architectural level. They are right to question models that depend on mass data collection. They are also right to argue that trust cannot be legislated into existence if systems undermine it by design.
The challenge now is to prove that we can deliver this vision responsibly, compliantly, and at scale, rather than continuing to debate whether it is desirable.
That is where the next phase of this conversation must focus.
