
‘Be the one in control’: Why are more countries leaning towards banning social media access for kids?

Source: CNA Digital News Hub

17 Mar 2026 06:00 AM


When it comes to social media bans, governments are willing to absorb criticism on feasibility, privacy and civil liberties, experts tell CNA.

Social media apps on a person's phone. (File photo: iStock)

KUALA LUMPUR: As countries around the world mull social media bans for those under 16, tech firms have touted their own child safety features and warned of unintended effects in their bid to push back against more regulation.

But experts say that ship has sailed for Big Tech, noting that internet regulators seem more determined than ever to push through the bans despite what platforms say or do.

With strong public support for intervention against the threat of online harms, especially those targeting children, governments would rather absorb criticism on feasibility, privacy and civil liberties than be accused of inaction, the analysts add.

While tech firms can highlight how they are already protecting children on their respective platforms, the debate has moved beyond safety tools to whether they can demonstrate systemic and enforceable measures, the experts tell CNA.


“Social media platforms can (roll out features that may) reduce some regulatory pressure, but they are unlikely to reverse the current policy momentum through product tweaks alone,” said Galvin Lee, a marketing and economics lecturer at Taylor’s College in Malaysia.

The momentum in the region is palpable.

On Dec 10 last year, Australia became the first country to ban social media for children under 16, blocking access to platforms including TikTok, Alphabet's YouTube and Meta's Instagram and Facebook.

Since the landmark ban, regulators across Southeast Asia and Europe, in Brazil, and in a handful of US states have moved to study or emulate it.

Meta, for its part, has introduced teen accounts. These are private by default and cannot receive new message requests from users they are not connected with. Teens aged 13 to 15 need parental permission to change these settings.

Teen accounts are also barred from going live, and content that could be considered nudity is automatically filtered out of messages they receive.

Philip Chua, Meta’s APAC director of public policy for products, said there was a need to balance the potential benefits and harms of being connected on social media.

“The issue here with some of the ban proposals that we've been seeing around the world, including Australia, is that ultimately that results in a lot of unintended consequences,” he said.

These include migration to unregulated platforms, a surge in circumvention techniques like the use of virtual private networks (VPN), and the creation of a regulatory gap that does not reflect where teens actually spend time online, he added.

But Shafizan Mohamed, a communications lecturer at the International Islamic University Malaysia (IIUM), said regulators looked at “more than just talking or having features”.

“Even if Big Tech is being very serious in improving their safety features, coming up with new alternatives or initiatives, it would not make governments reconsider under-16 restrictions,” she told CNA.

“There is a bigger political momentum not just here in our part of the world, but also in Europe for example, where governments are shifting their positions from trusting platforms to enforcing regulations."

WHY REGULATORS AREN’T BUDGING
Like Meta, TikTok and Snapchat also build in child safety features on their platforms. TikTok’s family pairing feature lets parents set boundaries and customisable limits, while Snapchat applies safety and privacy settings by default for children.

But despite such initiatives, governments lean towards regulation as they have seen the impact of social media on children, concerns from parents and the larger issue of public trust, experts said.

"Governments can see that it is time they need to be the one in control; that it cannot be left to Big Tech to decide,” Shafizan said.

Australia’s ban has also shown that child safety politics can overpower platform lobbying, Lee from Taylor’s College said.

“Australia showed that once the issue is framed as a social protection question, governments may be willing to absorb criticism on feasibility, privacy, and even civil liberties if the public strongly supports intervention,” he said.

Shafizan said regulators were increasingly prioritising a “precautionary approach” to the social media landscape.

“For example, I think even MCMC would rather be accused of over-regulating than failing to act while all of these harms continue,” she said, referring to the Malaysian Communications and Multimedia Commission.

In response to CNA’s question on whether Meta's teen accounts feature would be enough to make governments reconsider blanket bans, Chua said the company shares a “common purpose” with regulators.

“But there's definitely, I think, more conversations to be had about how you can pursue the common intent,” he said.

“The conversations that we have with regulators is to figure out how we can keep people safe online, not just in a small number of apps that are actually perhaps more invested in safety than unregulated or newer apps.”

Chua reiterated Meta’s calls for age verification to be introduced at the base level of app stores, saying that this would be more efficient than requiring age verification for each of the dozens of apps that teens use.

This means app stores would be required to verify a user's age before letting them download new apps.
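The store-level approach Chua describes can be illustrated with a toy model: verify a user's age once when the store account is created, then gate every download against each app's minimum age. This is purely an illustrative sketch; the account class, app names and age thresholds below are hypothetical, and a real system would depend on verified identity signals rather than a self-declared number.

```python
# Illustrative sketch only: a toy model of app-store-level age gating.
# All names and thresholds are hypothetical, not any store's real API.
from dataclasses import dataclass

@dataclass
class StoreAccount:
    user_id: str
    verified_age: int  # age established once, at the store level

# Hypothetical per-app minimum ages a store might enforce
APP_MIN_AGE = {
    "social_app": 16,     # e.g. under an Australia-style under-16 ban
    "messaging_app": 13,
    "education_app": 0,   # no restriction
}

def can_download(account: StoreAccount, app: str) -> bool:
    """Gate the download once, instead of each app re-verifying age."""
    return account.verified_age >= APP_MIN_AGE.get(app, 0)

teen = StoreAccount("u1", verified_age=14)
print(can_download(teen, "social_app"))     # False: below the threshold
print(can_download(teen, "education_app"))  # True: unrestricted
```

The design point is that the age check happens in one place, so dozens of individual apps never need to collect identity documents themselves, which is the efficiency argument Meta makes.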

Meta has shared with MCMC its child safety features and the unintended consequences of a ban, Chua said, calling it a “constructive working relationship”.

“My hope is those points are well registered,” he added.

CNA has reached out to MCMC for comment.

UNINTENDED CONSEQUENCES
The unintended consequences that tech firms cite to push back against blanket bans are not without merit, but only up to a point, experts told CNA.

Lee said it is true that underage users can circumvent restrictions through borrowed identities, older siblings’ accounts, VPNs, or migration to less regulated online spaces.

He also pointed to how Australia’s ban excludes standalone messaging apps, online gaming, professional networking, education and health support services, creating “obvious edge cases”.

But Lee said regulators keep insisting on bans or hard minimum-age rules because they no longer see this as just an access problem, but an incentives problem.

“In their view, platforms have had years to improve teen safety and have not earned the presumption of self-regulation,” he said.

“A ban is blunt, but it creates a non-negotiable compliance duty and shifts the burden back onto firms that design and profit from these systems.”

Lee also noted that Australia’s framework includes continuing oversight and an independent review within two years.

“In other words, Australia has accepted that this is not a one-off announcement but an evolving regulatory programme,” he added.

Shafizan from IIUM said loopholes are "natural" in any newly implemented legislation, with regulators involved in a “learning process” to identify and close them.

"But I think most governments still see regulation as a stronger governance signal, rather than voluntary company safeguards,” she said.

Benjamin Loh, a senior lecturer in media and communication at Taylor’s University in Malaysia, however, described unintended consequences as a “legitimate concern”, citing how the US tried to ban alcohol during the Prohibition era of the 1920s.

Prohibition not only failed to eliminate alcohol consumption but also triggered a rapid rise in organised crime and dangerous illicit alcohol production.

“Social media has become quite ingrained in the lives of most young people and cutting it off cold turkey will likely make many resort to risky behaviours to circumvent it, hence why there needs to be nuance in the way the ban is enforced,” Loh said.

This means regulators should not only use identity or age verification in enforcing the ban but also monitor overall usage across all apps, he explained.

BEST WAY FORWARD
When tweaking ban policies, regulators could still be influenced by the “credibility of the package” behind tech firms’ child safety features, said Lee from Taylor’s College.

Platforms would need to show independently auditable age assurance, privacy-preserving enforcement, default high-safety settings for minors, faster intervention against grooming and bullying, and transparent data on outcomes, he said.

“So the obstacle is now less about technical feasibility and more about trust, accountability, and whether regulators believe platforms are moving fast enough without legal compulsion,” he added.

But Loh warned that the Cambridge Analytica scandal has made platform owners “far more defensive” in how they interact with regulators, especially since most regulators tend to approach platforms individually rather than collectively.

In 2018, it was revealed that British consulting firm Cambridge Analytica collected personal data belonging to millions of Facebook users without their consent, mainly to be used for political advertising.

Before the scandal, regulators often deferred to platform owners’ judgment to self-regulate, accepting the narrative that they lacked the knowledge to properly understand and regulate social media, Loh said.

But the scandal made regulators realise that Big Tech could not be trusted, which in turn led to constant pressure on the industry to do more and tech firms “fighting back at every turn”, he said.

“Platform owners are also more careful in ensuring that global-level regulations like the EU’s GDPR are not so easily created or at the very least are only produced with their direct input and influence, which we can see with artificial intelligence regulation,” Loh said.

The European Union’s General Data Protection Regulation is a legal framework that imposes data privacy obligations on organisations anywhere, as long as they target or collect data related to people in the EU.

While governments can set non-negotiable child safety standards, platforms should also be able to retain some flexibility in how they meet them in a co-regulation model, Shafizan said.

She highlighted that online child safety cannot just depend on strict legislation, but also a more “comprehensive movement” involving digital literacy, awareness and community support.

“I think the most sustainable policy, therefore, is not just one that makes platforms materially safer by design, but also gives governments real enforcement tools to allow room to refine these rules as evidence emerges,” said Shafizan.

Devashish Govind Tokekar
