Internet Watch Foundation confirms first AI-generated child sex abuse images

The Internet Watch Foundation has warned of the threat from AI-generated images of child sex abuse (Dominic Lipinski/PA)

Tackling the threat from artificially generated images of child sex abuse must be a priority at the UK-hosted global AI summit this year, an internet safety organisation warned as it published its first data on the subject.

Such “astoundingly realistic images” pose a risk of normalising child sex abuse and tracking them to identify whether they are genuine or artificially created could also distract from helping real victims, the Internet Watch Foundation (IWF) said.

The organisation – which works to identify and remove online images and videos of child abuse – said while the number of AI images being identified is still small “the potential exists for criminals to produce unprecedented quantities of life-like child sexual abuse imagery”.

Of 29 URLs (web addresses) containing suspected AI-generated child sexual abuse imagery reported to the IWF between May 24 and June 30, seven were confirmed to contain AI-generated imagery.

This is the first data on AI-generated child sexual abuse imagery the IWF has published.

It said it could not immediately say in which countries the URLs were hosted, but that the images contained Category A and B material – some of the most severe kinds of sexual abuse – with children as young as three years old depicted.

Its analysts also discovered an online “manual” written by offenders with the aim of helping other criminals train the AI and refine their prompts to return more realistic results.

The organisation said such imagery – despite not featuring real children – is not a victimless crime, warning that it can normalise the sexual abuse of children, and make it harder to spot when real children might be in danger.

Last month, Rishi Sunak announced the first global summit on artificial intelligence (AI) safety to be held in the UK in the autumn, focusing on the need for international co-ordinated action to mitigate the risks of the emerging technology generally.

Susie Hargreaves, chief executive of the IWF, said fit-for-purpose legislation needs to be brought in “to get ahead” of the threat posed by the technology’s specific use to create child sex abuse images.

She said: “AI is getting more sophisticated all the time. We are sounding the alarm and saying the Prime Minister needs to treat the serious threat it poses as the top priority when he hosts the first global AI summit later this year.

“We are not currently seeing these images in huge numbers, but it is clear to us the potential exists for criminals to produce unprecedented quantities of life-like child sexual abuse imagery.

“This would be potentially devastating for internet safety and for the safety of children online.

“Offenders are now using AI image generators to produce sometimes astoundingly realistic images of children suffering sexual abuse.

“For members of the public – some of this material would be utterly indistinguishable from a real image of a child being sexually abused. Having more of this material online makes the internet a more dangerous place.”

Susie Hargreaves of the Internet Watch Foundation said the PM must treat the threat as a top priority (Internet Watch Foundation/PA)

She said the continued abuse of this technology “could have profoundly dark consequences – and could see more and more people exposed to this harmful content”.

She added: “Depictions of child sexual abuse, even artificial ones, normalise sexual violence against children. We know there is a link between viewing child sexual abuse imagery and going on to commit contact offences against children.”

Dan Sexton, chief technical officer at the IWF, said: “Our worry is that, if AI imagery of child sexual abuse becomes indistinguishable from real imagery, there is a danger that IWF analysts could waste precious time attempting to identify and help law enforcement protect children that do not exist.

“This would mean real victims could fall between the cracks, and opportunities to prevent real life abuse could be missed.”

He added that the machine learning models used to create the images have, in some cases, been trained on data sets containing real child victims of sexual abuse, therefore “children are still being harmed, and their suffering is being worked into this artificial imagery”.

The National Crime Agency (NCA) said while AI-generated content features only “in a handful of cases”, the risk “is increasing and we are taking it extremely seriously”.

Chris Farrimond, NCA director of threat leadership, said: “The creation or possession of pseudo-images – those created using AI or other technology – is an offence in the UK. As with other such child sexual abuse material viewed and shared online, pseudo-images also play a role in the normalisation and escalation of abuse among offenders.

“There is a very real possibility that if the volume of AI-generated material increases, this could greatly impact on law enforcement resources, increasing the time it takes for us to identify real children in need of protection.”