Schoolchildren in the UK are now using AI to generate indecent images of other children, internet safety groups have warned.
The UK Safer Internet Centre (UKSIC) said it has begun receiving reports from schools that children are making, or attempting to make indecent images of other children using AI image generators.
The child protection organisation said such images – which legally constitute child sexual abuse material – could have a harmful impact on children, and warned that they could also be used to abuse or blackmail them.
UKSIC has urged schools to ensure that their filtering and monitoring systems are able to effectively block illegal material across school devices in an effort to combat the rise of such activity.
David Wright, UKSIC director, said: “We are now getting reports from schools of children using this technology to make, and attempt to make, indecent images of other children.
“This technology has enormous potential for good, but the reports we are seeing should not come as a surprise.
“Young people are not always aware of the seriousness of what they are doing, yet these types of harmful behaviours should be anticipated when new technologies, like AI generators, become more accessible to the public.
“We clearly saw how prevalent sexual harassment and online sexual abuse was from the Ofsted review in 2021, and this was a time before generative AI technologies.
“Although the case numbers are currently small, we are in the foothills and need to see steps being taken now, before schools become overwhelmed and the problem grows.
“An increase in criminal content being made in schools is something we never want to see, and interventions must be made urgently to prevent this from spreading further.
“We encourage schools to review their filtering and monitoring systems and reach out for support when dealing with incidents and safeguarding matters.”
In October, the Internet Watch Foundation (IWF), which forms part of UKSIC, warned that AI-generated images of child sexual abuse are now so realistic that many would be indistinguishable from real imagery, even to trained analysts.
The IWF said it had discovered thousands of such images online.
Artificial intelligence has increasingly become an area of focus in the online safety debate over the last year, particularly since the launch of the generative AI chatbot ChatGPT last November. Many online safety groups, governments and industry experts have called for greater regulation of the sector amid fears it is developing faster than authorities are able to respond.