Australia Threatens Major App Stores and Search Engines with Aggressive Sanctions Over AI Age Verification Failures

Australia warns Apple and Google they must block AI services that fail to verify user ages. Learn why AI firms that ignore the mandate risk fines of up to A$49.5 million.

By: AXL Media

Published: Mar 2, 2026, 5:58 AM EST

Source: The information in this article was sourced from CNA


The Looming Deadline for Global AI Compliance

The Australian eSafety Commissioner is preparing to exercise sweeping new regulatory powers against artificial intelligence services that fail to safeguard young users. Starting March 9, 2026, AI platforms operating in the country must implement robust systems to prevent minors under 18 from accessing restricted material, including pornography, extreme violence, and content promoting self-harm. The framework is among the most aggressive regulatory moves globally, placing the burden of proof squarely on technology providers. Non-compliance carries staggering financial penalties, with fines of up to 49.5 million Australian dollars for companies that disregard the new safety mandates.

Targeting Digital Gatekeepers as Enforcement Tools

In a strategic shift, the regulator has indicated it will target not only the AI providers themselves but also the "gatekeeper" services that distribute them, including the dominant app stores operated by Apple and Google as well as major search engines. If an AI service or chatbot fails to verify the age of its users, the eSafety Commissioner may compel these intermediaries to block access to the non-compliant software. Apple has committed, in general terms, to using reasonable methods to restrict downloads by minors, while Google has so far declined to say how it will help enforce the domestic age restrictions.

Alarms Over Emotional Manipulation of Minors

The crackdown is fueled by growing evidence that AI platforms may pose risks to youth mental health exceeding those of traditional social media. The eSafety Commissioner has expressed particular concern about anthropomorphic design and emotional manipulation intended to captivate young users. Reports provided to the regulator indicate that children as young as 10 are engaging with interactive AI tools for up to six hours a day. Authorities argue these techniques entice children into excessive usage patterns, often without the safety controls needed to prevent exposure to psychologically damaging or violent content.
