Australia Targets App Stores and Search Engines in AI Age Crackdown

Australia is intensifying its crackdown on artificial intelligence (AI) services as part of its ongoing efforts to protect minors. The country’s eSafety regulator announced plans to enforce age verification for users accessing potentially harmful content through various online platforms.

New Regulations Target AI Services

Beginning March 9, 2026, online services operating in Australia, including app stores and search engines, must implement age restrictions. The rules specifically aim to prevent users under 18 from accessing content related to pornography, extreme violence, self-harm, and eating disorders. Failure to comply could result in fines of up to A$49.5 million (approximately $35 million).

Scope of Compliance

A recent review revealed that over half of the 50 most popular AI platforms had not publicly shared their plans for age verification by the upcoming deadline. The review assessed each platform’s responses to queries about restricted content and moderation policies. Findings indicated:

  • Nine platforms had either announced or implemented age assurance systems.
  • Eleven platforms had blanket content filters to prevent access to restricted materials.
  • The remaining 30 platforms had taken no public steps toward compliance.

Prominent AI services, such as OpenAI’s ChatGPT and Character.AI, have faced scrutiny over their interactions with younger users. Reports indicate that children as young as 10 are engaging with AI tools for extended periods.

Government’s Stand on AI and Youth Protection

Australia’s proactive stance follows its landmark decision, which took effect in December 2025, to become the first nation to bar under-16s from social media over mental health concerns. The move has prompted international discussion about the regulation of online platforms.

A spokesperson for eSafety emphasized that all companies operating in Australia are responsible for understanding and meeting their legal obligations. The regulator has indicated it will take action against any non-compliant services, particularly targeting major platforms such as search engines and app stores.

Industry Response

Responses from within the industry have been mixed. Apple, for example, has said it will employ “reasonable methods” to prevent minors from downloading age-restricted apps, though it has not specified what those methods are. A Google representative declined to comment on the matter.

Jennifer Duxbury from the internet industry group DIGI highlighted the need for chatbot services to be informed about the new rules. However, compliance among these platforms remains minimal.

Concerns About Young Users and AI

Experts have voiced concerns about the emotionally manipulative tactics sometimes employed by AI companies, which can draw young users into prolonged interaction with chatbots. Without established guidelines, the potential for harm remains significant.

Lisa Given of RMIT University criticized the design processes behind many AI tools, suggesting that the safety controls needed to protect users are often overlooked. The urgency of effective regulation has never been clearer as Australia balances technological advancement with the safety of its youth.

Conclusion

As Australia sets a precedent in regulating AI services, the international community watches closely. The nation’s drive to enforce age verification represents a bold initiative in safeguarding minors from online harms associated with artificial intelligence.