Australia’s eSafety Commissioner Julie Inman Grant announced Friday that her office will require search engines to remove AI-generated child sexual abuse material from search results. The announcement responds to the growing threat to children’s privacy and rights posed by AI and deepfakes.
The online safety codes and standards will focus on protections against new risks posed by the integration of generative AI and will cover multiple sections of the online industry. The new codes and standards will require industry participants to take appropriate measures to address the risk of class 1 material, including child sexual abuse material, on their services in Australia. Another key requirement is that AI functionality integrated with search engines is not to be used to generate “synthetic” versions of this material.
Inman Grant commented on the growing use of generative AI, saying:
When the biggest players in the industry announced they would integrate generative AI into their search functions we had a draft code that was clearly no longer fit for purpose and could not deliver the community protections we required and expected.
The online safety code will be drafted in accordance with the Online Safety Act 2021 (Cth) (“the Act”). The Act is the primary source of law in Australia regulating illegal and restricted online content, and gives the eSafety Commissioner substantial new powers to protect Australians against harm.
The registered industry codes currently apply to five online industry sectors, namely Social Media Services, Internet Carriage Services, App Distribution Services, Hosting Services, and Equipment Providers. The new industry standards will focus on two further sectors, namely Designated Internet Services and Relevant Electronic Services.