Few things command faster agreement than the need to protect children online. Rightly so. Young people should not see adult content or interact with unknown adults in unmoderated spaces. Grooming, exploitation, and abuse demand serious attention.
Where things begin to go wrong is not in the goal, but in the solution.
Governments and platforms around the world are responding to these concerns by demanding more identity data. Age verification now routinely means uploading passports, government ID cards, selfies, or biometric scans. Sometimes children are asked to provide this information themselves; in other cases, parents submit it on their behalf.
This approach assumes that protection requires collecting more personal data from young people. That is a deeply flawed assumption.
Proving that someone is over a certain age is not the same as revealing who they are. Yet most current systems conflate the two: instead of verifying an attribute, such as "over 13" or "over 18," they demand full identity disclosure. Children and teenagers are being pushed into permanent identity exposure long before they can understand the consequences.
Biometric data and government ID documents are not like passwords. You cannot change them if they are compromised, and you cannot revoke them once they are leaked. A facial scan taken today may still be usable decades from now, long after the platform that collected it has changed ownership, been breached, or shut down.
When these systems fail, the cost falls on the individual, not the institution. For children, it falls on people who were never able to give meaningful consent in the first place.
This is not a hypothetical risk. History has shown that large identity datasets become targets. They can be breached, copied, sold, and misused. The bigger and more centralised the system, the greater the incentive to attack it. Asking for more data does not make people safer. It increases the blast radius when something goes wrong.
There is also a quieter harm. Normalising identity disclosure as the price of participation teaches young people that privacy is optional and exposure inevitable. It conditions them to see surveillance as safety and data extraction as care. That lesson does not disappear at eighteen.
None of this means we should abandon age protection or online safety. It means we should take those goals seriously enough to design systems that do not create new risks in the process.
A child should be able to prove they are old enough to access something without revealing their name, face, or identity. Platforms should use privacy-preserving age checks to ensure compliance without storing sensitive personal data. Governments should set standards that focus on verifying age rather than collecting identity information, so citizens are not forced into unnecessary digital exposure.
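To make that concrete, here is a minimal sketch in Python of what an attribute-only age check can look like, using the open-source cryptography library. The issuer role, the over_18 field, and the overall flow are illustrative assumptions rather than any mandated standard, and a real deployment would also need unlinkability, expiry, and replay protection. The point is simply that the platform verifies a signed attribute and never receives a name, document, or photo.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Issuer side: a trusted age-check service (hypothetical role) ---
# The issuer confirms the user's age once, then signs a claim that contains
# nothing but the age attribute itself.
issuer_key = Ed25519PrivateKey.generate()
claim = json.dumps({"over_18": True}).encode()  # no name, no ID number, no photo
signature = issuer_key.sign(claim)

# --- Platform side: checks the claim and stores no personal data ---
issuer_public_key = issuer_key.public_key()

def platform_accepts(claim_bytes: bytes, sig: bytes) -> bool:
    """Grant access if the attribute claim is authentic; keep nothing afterwards."""
    try:
        issuer_public_key.verify(sig, claim_bytes)
    except InvalidSignature:
        return False
    return bool(json.loads(claim_bytes).get("over_18", False))

print(platform_accepts(claim, signature))  # True: age proven, identity never shared
```

The design choice matters more than the specific cryptography: the only thing that ever leaves the issuer is a yes-or-no attribute, so there is no identity database for a platform to lose.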
Protecting children online is not about collecting more data; it is about reducing risk. That requires two shifts: replacing identity-based control with privacy-preserving proof, and moving from convenience-driven policy to responsibility-driven design.
Let us not ask young people to pay for our shortcuts with their future privacy. Stand up now for privacy-conscious solutions that genuinely safeguard the next generation.
