Source: SCMP
Australia's internet regulator has announced plans to push search engines and app stores to block artificial intelligence services that do not verify user ages. The measure comes after a review found that more than half of popular AI platforms had taken no steps to comply with impending age restriction rules, which take effect on March 9. The regulations require AI services, including OpenAI's ChatGPT, to prevent users under 18 from accessing harmful content such as pornography and self-harm material, or face fines of up to A$49.5 million (approximately $35 million), according to Reuters and South China Morning Post.
This initiative follows Australia's precedent-setting ban on social media for users under the age of 16, which was implemented amid growing concerns about youth mental health. With cases of self-harm and violence linked to AI services being reported globally, Australia aims to be at the forefront of safeguarding minors from these technologies. The eSafety commissioner stated that compliance would be strictly monitored, asserting, "eSafety will use the full range of our powers where there is non-compliance," which includes actions against both AI services and the "gatekeeper services" that provide access to these tools, according to Reuters and India Times.
A review of 50 popular text-based AI products revealed that only nine had implemented, or announced plans for, age assurance systems, while a further 11 had either applied blanket content filters or planned to block access for Australians entirely. The remaining 30 platforms showed no visible steps toward compliance. Leading AI platforms, including ChatGPT and Replika, have begun rolling out stronger filters to meet these requirements, suggesting some readiness to abide by the new laws, according to South China Morning Post and India Times.
Amid these developments, concerns have grown about minors using AI platforms for prolonged periods, with reports indicating that children as young as 10 are interacting with these tools for hours each day. Regulatory officials have voiced apprehension that AI services may exploit emotional engagement techniques, further drawing young users into excessive use. The push for stronger regulations marks a significant shift in Australia's approach to youth safety, which now extends beyond social media to encompass AI technologies, as highlighted by Reuters and India Times.