When Policy Is Written by Platforms

By Kayne Brennan • 17 Feb 2025

#digital-policy #platform-power #identity-governance #privacy-by-design #public-interest

Digital identity policy rarely emerges in a vacuum.

In most places, laws on online safety, identity verification, and access controls are shaped under pressure. Governments respond to real concerns: protecting minors, preventing fraud, tackling crime, and sustaining trust in digital systems. These goals are legitimate.

But the policy-making environment shapes how those goals are pursued, and that influence skews outcomes from the outset.

As a result, many digital identity frameworks are steered by large tech platforms, data brokers, and infrastructure providers—those best equipped to offer technical advice and swift implementation. Policymakers, often under time pressure, come to rely on their expertise.

The result is policy aligned with the interests and architectures of dominant platforms, often at the expense of the wider public, in ways that reinforce data-driven business models built on visibility.

In this environment, policy frequently reflects existing business models: identity at scale, compliance as data collection, and enforcement through visibility.

In these circumstances, few stop to ask whether prioritising platform-driven approaches ultimately compromises the public's long-term interests.

For large platforms, identity data is more than a compliance burden; it is an asset. It enables profiling, reduces anonymity, strengthens lock-in, and creates leverage in regulatory conversations. Systems that require persistent identity make platforms easier to govern from the top down but harder for individuals to leave.

This pattern helps explain why certain solutions keep resurfacing. Mandatory identity checks. Centralised verification services. Proposals to restrict or ban privacy tools, including VPNs. Each is framed as necessary for safety or enforcement, yet it also reduces individual autonomy while increasing institutional control.

These outcomes may not be deliberate, but they are rarely accidental.

The least-heard voices in these policy conversations are often those most affected: ordinary users, young people, marginalised groups, and those at greatest risk when identity systems fail. Their risks are abstract in policy documents but real in daily life.

When policy is shaped by those who benefit from data access, the outcome is more collection and more centralisation. Alternatives focused on minimisation or user control are often dismissed as impractical, even when they could meet the same stated goals.
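
To make "minimisation" concrete, here is a short, hypothetical Python sketch (the document fields and function names are invented for illustration, not drawn from any real system). Both checks answer the same regulatory question, "is this person an adult?", but only one of them leaves a permanent identity record behind.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class IdentityDocument:
    """Hypothetical identity document, used here only as a policy illustration."""
    full_name: str
    date_of_birth: date
    document_number: str


def is_adult(dob: date, today: date) -> bool:
    """Return True if the holder is 18 or older on the given date."""
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= 18


def verify_and_retain(doc: IdentityDocument, store: list) -> bool:
    """Data-maximising pattern: the whole document is kept after the check,
    so any later breach or repurposing exposes the full record."""
    store.append(doc)
    return is_adult(doc.date_of_birth, date.today())


def verify_and_discard(doc: IdentityDocument) -> bool:
    """Data-minimising pattern: only the yes/no answer leaves this function;
    the document itself is never persisted."""
    return is_adult(doc.date_of_birth, date.today())


if __name__ == "__main__":
    doc = IdentityDocument("A. Example", date(2001, 5, 4), "X1234567")
    retained: list = []
    print(verify_and_retain(doc, retained))  # True, and the full record is now stored
    print(verify_and_discard(doc))           # True, and nothing is kept
```

Both functions satisfy the same stated goal; the difference is what remains in the system afterwards, and who bears the risk if that data leaks.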

This creates laws that mandate risk rather than reduce it. Compliance comes to mean disclosure. Privacy becomes a hurdle rather than a core design principle.

A good digital identity policy recognises that unrestrained platform power shapes how data is collected and used. Identity systems must be designed on the assumption that they will eventually fail, with privacy treated as a core requirement rather than a concession.

If governments want identity systems that earn public trust, they must broaden who they listen to and act on what they hear. That means seeking input from technologists focused on harm reduction, not just scale; engaging affected communities directly; establishing accessible feedback channels; and explaining transparently why particular solutions were chosen. Policymakers also need to challenge approaches that prioritise platform convenience over people's protection. Only by taking these concrete steps can governments build digital identity systems that genuinely serve citizens.

Policies written for platforms may be easier to deploy. But governments should commit to policies built for citizens, even when that takes more effort and deliberation.

Designing digital identity policy that is broad in reach and protective by default is essential. Governments must commit to lasting solutions that put the public interest first, safeguard privacy, and minimise risk. The time to act is now.