Tech giants including TikTok and Snapchat have signed a pledge vowing to tackle the spread of artificially generated images of child sex abuse.
Home Secretary Suella Braverman hosted the companies, as well as charities, academics and representatives of the UK and Australian governments at an event on Monday to address the “shocking” spread of AI-generated material depicting children being abused.
It comes after an internet safety organisation warned that the realistic imagery risks normalising sexual violence against children.
The images, although computer-generated, are often based on real material, “revictimising” the survivors of that past abuse, the Internet Watch Foundation (IWF) said.
The event, held in partnership with the IWF, took place ahead of Rishi Sunak’s AI Safety Summit at Bletchley Park later this week.
Ms Braverman said: “Child sexual abuse images generated by AI are an online scourge. This is why tech giants must work alongside law enforcement to clamp down on their spread. The pictures are computer-generated but they often show real people – it’s depraved and damages lives.
“The pace at which these images have spread online is shocking and that’s why we have convened such a wide group of organisations to tackle this issue head-on. We cannot let this go on unchecked.”
IWF data showed that thousands of AI-generated child sex abuse images circulating on the dark web are realistic enough to be treated as real imagery under UK law.
The organisation, which works to identify and remove online images and videos of child abuse, said the AI-generated material could make it harder to spot when real children might be in danger.
IWF chief executive Susie Hargreaves said: “The realism of these images is astounding, and improving all the time. The majority of what we’re seeing is now so real, and so serious, it would need to be treated exactly as though it were real imagery under UK law.
“It is essential, now, we set an example and stamp out the abuse of this emerging technology before it has a chance to fully take root. It is already posing significant challenges.”
The joint statement signed by tech companies including OnlyFans and Stability AI pledged to sustain “technical innovation around tackling child sexual abuse in the age of AI”, according to the Home Office.
The statement affirms that AI must be developed in “a way that is for the common good of protecting children from sexual abuse across all nations”.
Other signatories included the NSPCC, National Crime Agency and National Police Chiefs’ Council.
The Government is considering further investment in the use of AI to combat child sexual abuse.
It already uses the technology to sort through large volumes of data and grade the severity of the material, helping police identify offenders and safeguard children, the Home Office said.
NSPCC chief executive Sir Peter Wanless said: “Already we are seeing AI child abuse imagery having a horrific impact on children, traumatising and retraumatising victims who see images of their likeness being created and shared. This technology is giving offenders new ways to organise and risks enhancing their ability to groom large numbers of victims with ease.
“It was important to see child safety on the agenda today. Further international and cross-sector collaboration will be crucial to achieve safety by design.”