Leading AI companies commit to stopping generative AI child abuse media

Amazon, Google, Meta, Microsoft, and OpenAI were among the companies that committed to fighting AI-generated child sexual abuse material

April 24, 2024 12:19 pm | Updated 12:57 pm IST

The tech companies are working with Thorn, an organisation that uses tech to combat child abuse [File] | Photo Credit: REUTERS

Major AI and tech companies including OpenAI, Google, and Meta have committed to an agreement aimed at cracking down on child sexual abuse media created with generative AI technologies and tools.

The tech companies are working with Thorn, an organisation that uses technology to combat child abuse, as well as All Tech Is Human, a non-profit that addresses the societal impact of technology.

“In collaboration with Thorn and All Tech Is Human, Amazon, Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI, OpenAI, and Stability AI publicly committed to Safety by Design principles. These principles guard against the creation and spread of AI-generated child sexual abuse material (AIG-CSAM) and other sexual harms against children,” said Thorn in an official post on Tuesday.

Thorn urged these companies to use safe and ethical data sets that do not contain child sexual abuse imagery, to stress-test their AI media generation processes, and to develop tools that detect child abuse material.

Beyond fears that criminals could use generative AI to create child abuse media or to target real children, experts have raised concerns about AI models being trained on tainted data sets that contain child sexual abuse or exploitation material (referred to as CSAM or CSEM) at their source.

One example is the LAION-5B data set of billions of images, which a Stanford Internet Observatory report said could contain large amounts of suspected child abuse material, Bloomberg reported in December 2023.

“For some models, their compositional generalization capabilities further allow them to combine concepts (e.g. adult sexual content and non-sexual depictions of children) to then produce AIG-CSAM. Avoid or mitigate training data with a known risk of containing CSAM and CSEM,” said Thorn in its post, adding that such content should be removed from data sets and reported to the authorities.

Hollywood actor Ashton Kutcher co-founded Thorn in 2009 with his then-wife Demi Moore, while his current wife Mila Kunis served as an observer on the board.

However, Kutcher and Kunis faced public fury last year after it emerged that they had written letters vouching for the character of former co-star Danny Masterson, who was convicted of rape.

As fury spread and their involvement with the anti-child sex abuse organisation was called into question, Kutcher and Kunis filmed an apology video together.

In September, Kutcher resigned as the chairman of Thorn’s board. Kunis also left her position.

While large tech companies are legally obligated to put in place safeguards to stop AI-enabled child abuse media from being made, there are countless smaller unregulated platforms and tools online that do not apply such content filters.


