Why social media bans won’t save our kids
Vendan Ananda Kumararajah

Politicians are rushing to block under-16s from social platforms, but the danger runs much deeper than screen time or teenage scrolling, warns Vendan Ananda Kumararajah. The real threat lies in systems built for profit, not childhood, and only a redesign of the platforms themselves will make the online world genuinely safe for young people.
Australia’s new under-16 social media restrictions came into force this week, and new UK research shows that more than half of British adults would support a similar ban. The instinct is understandable. Parents feel outmatched by opaque platforms that shape their children’s lives, while trust in tech firms is eroding rapidly. Only 13% of UK adults say they trust social media platforms with biometric data — a strikingly low figure in an era where age-estimation technology is becoming central to online safety laws.
But beneath the headlines lies a deeper systemic dilemma. Children increasingly need to be online to learn, socialise, and participate in digital culture. At the same time, parents perceive that the environments their children inhabit have been engineered for commercial optimisation rather than developmental wellbeing. This is not a contradiction that can be solved through bans alone. It is a symptom of a governance architecture that has failed to evolve.
The modern social platform is a machine optimised for attention, growth, and data extraction. This is not a moral failing of individual companies; it is a structural feature of the system they operate within. The incentives that drive platform behaviour do not naturally align with the safety, dignity, or developmental integrity of young users.
This is why even well-intentioned regulations often fall short. Age-verification requirements are expanding globally, yet public trust remains low. Most adults support age checks in principle, but only when the mechanisms are privacy-preserving and transparent. Without such guarantees, age assurance becomes another point of anxiety in an already polarised debate.
What parents are expressing, often without the vocabulary for it, is a profound discomfort with systems that lack ethical accountability. They want more than content filters or parental controls. They want to know that the deeper machinery of the digital world recognises childhood as something uniquely worth protecting.
If society reaches the point where banning under-16s from social media appears to be the most viable option, it reveals something important: we have not built digital environments fit for children in the first place. A ban may create breathing room, but it avoids the harder question of what it would take for online spaces to be inherently safe for the young rather than selectively restrictive. Surface fixes such as content moderation, reporting tools, or even biometric age checks operate at the periphery of the system. The real issues are structural, and structural problems require architectural solutions.

To move beyond bans and surface interventions, social platforms must adopt a governance architecture that examines how decisions are made, how incentives shape behaviour, and how ethical constraints are embedded throughout the system. This requires three interlocking layers.
- Ethical Viability — ensuring systems remain aligned with child wellbeing.
Before deployment, platforms should be required to demonstrate ethical viability: an assessment of whether the system’s design, algorithms, and data flows support or undermine children’s developmental needs. This involves evaluating how recommendation systems handle vulnerability, identifying algorithmic pathways that push minors toward harmful content, and stress-testing whether platform design encourages agency or dependence. This is not a moral appeal; it is a governance expectation. Just as aviation systems must prove safety before flight, digital platforms should prove ethical fitness before use.
- Distortion Tracking — monitoring algorithmic drift in real time.
Even well-designed systems degrade when incentives distort their behaviour. A governance architecture must therefore track systemic drift continuously, observing when algorithms shift toward addictive engagement patterns, when content ecosystems polarise or sensationalise, when commercial pressures begin to override safety constraints, and when subtle forms of exploitation emerge through feedback loops. The problem today is that platforms tend to intervene only after public outcry. Architectural governance demands that distortion be detected and corrected before harm scales.
- Legitimate Agency — enabling age-appropriate participation.
The final layer focuses on creating ways for young users to participate safely and meaningfully, with boundaries that align with their developmental stage. This includes designing curated digital ecosystems for younger users, developing transparent and privacy-first on-device age estimation, providing mechanisms that give guardians clarity without invading autonomy, establishing accountability trails regulators can audit, and creating pathways that expand as children mature. Agency is not exposure; it is the capacity to navigate environments built with developmental integrity in mind.
This architectural model moves safety from reaction to design. Instead of relying on bans, moderation after harm, PR-driven fixes, or crisis response, platforms would operate on ethical design principles, real-time drift correction, transparent and enforceable governance, and environments shaped around how children grow. This approach operationalises child safety at the level where harm originates: in system behaviour, not user behaviour.
The model dissolves the central contradiction between the need to protect children and the need for them to be online to learn and develop. With ethical viability, distortion tracking, and legitimate agency in place, children can participate without being thrust into adult-tier environments. Platforms can verify age without harvesting biometrics. Regulators gain accountability without invasive data demands. Parents regain trust because the system itself enforces boundaries. Platforms continue to innovate and grow through differentiated, safe ecosystems. The conversation shifts from control to capability — from excluding children to designing a digital world worthy of them.
New research shows that public support for facial age estimation triples when images never leave the device. This reveals something essential: the public is not rejecting age assurance; they are rejecting systems that treat safety as synonymous with surveillance. Architectural governance makes privacy-preserving age assurance one part of a broader ethical system rather than a lone tool or political slogan.
The debate about under-16 bans reflects a public searching for certainty in systems that currently offer very little of it. The real opportunity lies in redesigning the architectures of platforms and governance so that safety is structural, privacy is respected, agency is nurtured, trust is possible, and childhood is not collateral damage in the race for engagement. The question is not whether children should be online. They already are, and they will be. The question is whether we can build digital systems worthy of their presence.

Vendan Ananda Kumararajah is an internationally recognised transformation architect and systems thinker. The originator of the A3 Model—a new-order cybernetic framework uniting ethics, distortion awareness, and agency in AI and governance—he bridges ancient Tamil philosophy with contemporary systems science. A Member of the Chartered Management Institute and author of Navigating Complexity and System Challenges: Foundations for the A3 Model (2025), Vendan is redefining how intelligence, governance, and ethics interconnect in an age of autonomous technologies.
Main image: Kampus Production/Pexels