Australia’s online watchdog has accused the world’s biggest social platforms of failing to properly enforce the country’s ban on under-16s, despite legislation that came into force in December. The eSafety Commissioner, Julie Inman Grant, has expressed “significant concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, highlighting poor practices such as allowing banned users to repeatedly attempt age verification and inadequate safeguards against new account creation. In its first compliance report since the prohibition came into force, the regulator found numerous deficiencies and has now moved from monitoring to active enforcement, warning that platforms must demonstrate they have implemented “appropriate systems and processes” to stop under-16s from using their services.
Compliance Failures Exposed in First Major Review
Australia’s eSafety Commissioner has documented a concerning pattern of non-compliance amongst the world’s most prominent social media platforms in her inaugural review since the ban came into effect on 10 December. The report reveals that Meta, Snap, TikTok and YouTube have collectively neglected to establish adequate safeguards to stop minors from accessing their services. Julie Inman Grant expressed particular concern about systemic weaknesses in age verification, highlighting that some platforms have permitted children who initially declared themselves under 16 to subsequently claim they were older, effectively circumventing the law’s intent.
The findings represent a notable intensification of regulatory action, with the eSafety Commissioner transitioning from monitoring towards direct enforcement. The regulator has stressed that the mere presence of some underage accounts is not the decisive test; rather, platforms must furnish substantive proof that they have put in place comprehensive systems and procedures designed to prevent under-16s from creating accounts in the first place. This shift reflects the government’s determination to hold tech giants accountable, with potential penalties looming for companies that fail to meet their statutory obligations. Among the deficiencies identified in the report:
- Permitting formerly banned users to re-verify their age and regain account access
- Allowing multiple attempts at the same verification process without consequence
- Inadequate safeguards to stop new under-16 accounts from being established
- Limited complaint mechanisms for families and the wider community
- Absence of transparent data about enforcement efforts and account deletions
The Extent of the Issue
The sheer scale of social media usage amongst young Australians underscores the compliance challenge confronting both the authorities and the platforms in question. With millions of accounts already removed or restricted since the ban’s implementation, the figures provide evidence of widespread initial non-compliance. The eSafety Commissioner’s conclusions suggest that the technical and procedural obstacles to enforcing age restrictions have proven considerably more complex than anticipated, with platforms struggling to differentiate authentic age confirmations from false claims. This complexity has left enforcement authorities grappling with the fundamental question of whether current age verification technologies are fit for purpose.
Beyond the technical obstacles lies a wider question about the willingness of companies to place compliance ahead of user growth. Social media companies have consistently opposed stringent age verification measures, citing data protection concerns and the genuine difficulty of verifying age digitally. However, the Commissioner’s report suggests that some platforms may not be committing adequate resources to implementing the legally mandated systems. The move to active enforcement represents a critical juncture: either platforms will substantially upgrade their compliance infrastructure, or they stand to incur significant penalties that could transform their operations in Australia and possibly shape regulatory approaches internationally.
What the Numbers Reveal
In the first month after the ban’s introduction, Australian authorities stated that 4.7 million accounts had been restricted or taken down. Whilst this statistic initially appeared to show regulatory success, closer inspection reveals a more nuanced picture. The sheer volume of removals indicates that many under-16s had been able to set up accounts in the first place, suggesting that protective safeguards were inadequate. Additionally, the data raises the question of whether these removals represent genuine enforcement or simply users closing their own accounts in light of the new restrictions.
The minimal transparency regarding these figures has troubled independent observers attempting to evaluate the ban’s true effectiveness. Platforms have revealed minimal information about their implementation approaches, performance indicators, or the nature of deleted profiles. This absence of transparency makes it hard for regulators and the wider public to determine whether the ban is working as intended or whether younger users are simply finding different means to access social media. The Commissioner’s push for detailed evidence of systematic compliance measures reflects growing frustration with platforms’ resistance to disclosing comprehensive data.
Sector Reaction and Pushback
The social media giants have responded to the regulatory enforcement measures with a combination of assurances of compliance and scepticism about the practical feasibility of the ban. Meta, which operates Facebook and Instagram, stressed its commitment to adhering to Australian law whilst contending that precise age verification remains a significant industry-wide challenge. The company has advocated an alternative strategy, suggesting that robust age verification and parental consent requirements implemented at the app store level would be more effective than platform-level enforcement. This position reflects broader industry concerns that the current regulatory framework places an unrealistic burden on individual platforms.
Snap, the developer of Snapchat, has taken a more proactive public stance, announcing that it had suspended 450,000 accounts following the ban’s implementation and stating that it continues to lock more daily. However, industry observers question whether such figures reflect genuine compliance or merely reactive account management. The core conflict between platforms’ business models, which have historically relied on maximising user engagement and growth, and the regulatory requirement to systematically remove an entire age demographic remains unresolved. Companies have consistently opposed rigorous age verification methods, pointing to privacy concerns and technical limitations, creating a standoff between regulators and platforms over who bears responsibility for enforcement.
- Meta maintains age verification ought to take place at the app store level rather than on individual platforms
- Snap says it has locked 450,000 accounts following the ban’s implementation in December
- Industry groups point to privacy issues and technical challenges as impediments to effective age verification
- Platforms maintain they are doing their best whilst challenging the ban’s overall effectiveness
Larger Questions Regarding the Ban’s Impact
As Australia’s under-16 online platform ban moves into its implementation stage, key questions persist about whether the law will accomplish its intended goals or merely push young users towards less regulated platforms. The regulatory authority’s initial compliance assessment reveals that significant loopholes remain: children continue finding ways to bypass age verification mechanisms, and platforms have struggled to stop new underage accounts from being created. Critics contend that the ban’s success depends not merely on regulatory oversight but on whether young people will genuinely abandon major social networks or simply shift towards other platforms, encrypted messaging apps, or VPNs that mask their location.
The ban’s global implications add another layer of complexity to assessments of its success. Countries including the United Kingdom, Canada, and various European states are monitoring Australia’s approach closely, considering similar legislation for their own citizens. If the ban fails to reduce children’s digital engagement or does not protect them from harmful material, it could weaken the case for comparable regulations elsewhere. Conversely, if implementation proves strict enough to genuinely restrict underage usage, it may encourage other nations to adopt similar strategies. The outcome may shape worldwide regulatory patterns for years to come, ensuring Australia’s implementation efforts are scrutinised far beyond its borders.
Who Benefits and Who Loses
Mental health advocates and child safety organisations have championed the ban as a necessary intervention against algorithmic manipulation and exposure to harmful content. Parents and educators maintain that removing young Australians from platforms designed to maximise engagement could reduce anxiety, improve sleep patterns, and lessen exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks linked to social media use amongst adolescents, adding weight to these concerns. However, the ban also eliminates legitimate uses of social media for young people: maintaining friendships, accessing educational content, and participating in online communities built around shared interests. The regulatory framework assumes harm exceeds benefit, a calculation that some young people and their families dispute.
The ban’s real-world effects extend beyond individual users to content creators, small businesses, and community organisations dependent on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that rely on social media marketing can no longer reach younger audiences. Community groups, charities, and educational organisations find it difficult to engage young people through channels they previously used effectively. Meanwhile, the ban unexpectedly favours large technology companies with the resources to build age verification infrastructure, possibly reinforcing their market dominance rather than reducing it. These unintended consequences suggest the ban’s reach extends well beyond the simple goal of child protection.
What Follows for Compliance Monitoring
Australia’s eSafety Commissioner has signalled a notable transition from passive monitoring to direct intervention, marking a critical turning point in the implementation of the under-16 ban. The watchdog will now gather evidence to determine whether companies have failed to take “reasonable steps” to prevent underage access, a regulatory test that goes beyond simply documenting that young people remain on these services. This approach requires concrete evidence that companies have established proper safeguards and procedures intended to keep minors out. The enforcement team has stated it will pursue investigations methodically, building cases that could trigger considerable sanctions for breaches. This shift from oversight to intervention reveals growing frustration with the platforms’ current efforts and signals that voluntary cooperation alone will no longer suffice.
The enforcement phase raises critical questions about the sufficiency of sanctions and the practical mechanisms for holding tech giants accountable. Australia’s statutory provisions supply the regulatory tools, but their success hinges on the eSafety Commissioner’s readiness to undertake formal proceedings and the platforms’ capacity to respond effectively. International observers, notably regulators in Britain and Europe, will closely track Australia’s approach and its results. A robust enforcement effort could create a model for other jurisdictions contemplating comparable restrictions, whilst failure might undermine the overall legislative structure. The coming months will prove crucial in determining whether Australia’s groundbreaking legislation delivers real safeguards for adolescents or remains largely symbolic in its effect.
