There is a simple question at the heart of most online safety debates: How do we know someone is old enough?
The answer should be simple, yet the methods we currently use to find it are far more invasive than they need to be.
Across the internet today, proving age increasingly means revealing identity. Users are asked to upload passports, government-issued ID cards, selfies, or biometric scans to access age-restricted content, join competitions, or participate in online communities. In some cases, this applies to adults. In others, it applies directly or indirectly to children.
This is a category error.
Age is an attribute, not an identity. Proving that someone is over a certain age should not require disclosing personal details. Yet many systems default to full identity exposure just to prove one fact.
This happens not because it is the safest approach, but because it is the easiest to implement within existing structures. Identity documents are familiar. Verification vendors already exist. Platforms know how to store and process this data. From an institutional perspective, it feels reliable.
For individuals, it creates risk.
Identity documents and biometric data are permanent. Once shared, they cannot be taken back. They persist long after the verification moment has passed, often stored by third parties with unclear retention policies and limited accountability. A system designed to answer a simple yes-or-no question ends up creating a long-term record that far exceeds its original purpose.
This is particularly troubling when applied to young people. A child proving their age today should not be producing a digital artefact that follows them into adulthood. They should not be normalised into a system where exposure is the price of participation, or where privacy is framed as something to be sacrificed for safety.
There is also a trust cost. When people are repeatedly asked to reveal far more than is necessary, they begin to resist. Some disengage. Others look for workarounds. VPNs, fake documents, shared accounts, and avoidance behaviours become common. The system appears compliant on the surface, but its legitimacy erodes underneath.
This is not a failure of users. It is a failure of design.
Proving age does not require knowing a name, storing a face, or creating central databases of personal information. What it requires is a shift toward proof-first, rather than identity-first, design.
A well-designed age verification system should answer one question and forget the rest. It should minimise data by default, limit exposure by design, and reduce the consequences of failure. It should protect the individual even when institutions or platforms make mistakes.
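To make the idea concrete, here is a minimal sketch of what an attribute-only check might look like: a hypothetical trusted issuer signs a claim containing nothing but an over-18 flag and an expiry, and the platform verifies the signature without ever receiving a name, face, or document. The issuer, key handling, and token format are illustrative assumptions rather than a reference implementation; production systems would build on standards such as verifiable credentials or zero-knowledge proofs, which also address cross-site linkability.

```python
# Sketch of an attribute-only age check (illustrative assumptions throughout).
# An issuer signs a claim containing only "over_18" and an expiry; the platform
# verifies the signature and learns a single fact, which it does not store.

import json
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# --- Issuer side (a hypothetical trusted attester) ---

issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()


def issue_age_token(over_18: bool, ttl_seconds: int = 300) -> bytes:
    """Sign a minimal claim: no name, no document number, no birth date."""
    claim = {"over_18": over_18, "expires": int(time.time()) + ttl_seconds}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = issuer_key.sign(payload)
    return payload + b"." + signature.hex().encode()


# --- Platform side: verify the proof, then forget it ---

def verify_age_token(token: bytes, public_key: Ed25519PublicKey) -> bool:
    """Return True only if the signature is valid, the claim is unexpired,
    and it asserts over_18. Nothing about the person is retained."""
    payload, _, sig_hex = token.rpartition(b".")
    try:
        public_key.verify(bytes.fromhex(sig_hex.decode()), payload)
    except Exception:
        return False
    claim = json.loads(payload)
    return claim.get("over_18") is True and claim["expires"] > time.time()


token = issue_age_token(over_18=True)
print(verify_age_token(token, issuer_public_key))  # True
```

The point of the sketch is the shape of the exchange, not the cryptography: the verifier's question is answered with a yes or no, the token expires quickly, and there is nothing worth retaining or breaching afterwards.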
If we continue to treat identity disclosure as the only path to safety, we will keep creating systems that are overreaching, brittle, and ultimately untrusted. If we learn to separate attributes from identity, we can build protections that work without asking people to give up more than is necessary.
Protection that requires exposure is not protection.
