Government sets out ‘adaptable’ approach to regulating fast-developing AI

The White Paper’s aim is to protect the public while avoiding heavy-handed legislation that could stifle innovation.

The Government has set out its “adaptable” approach to regulating artificial intelligence, as it hopes to build public trust in the rapidly developing technology and tap its economic potential.

A White Paper, released on Wednesday, comes as the prevalence of AI has increased massively in recent years, with systems such as chatbot ChatGPT quickly becoming part of people’s everyday lives.

With the aim of striking a balance between regulation and innovation, the Government plans to use existing regulators in different sectors rather than giving responsibility for AI governance to a new single regulator.

The regulators should consider principles including safety, transparency and fairness to guide the use of AI in their industries.

This approach will mean there is more consistency across the regulatory landscape and that the rules can adapt as the fast-moving technology evolves, the Government hopes.

Science, Innovation and Technology Secretary Michelle Donelan said: “AI has the potential to make Britain a smarter, healthier and happier place to live and work.

“Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely.

“Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow.”

But critics have warned that with laws set to take a year or more to come into effect, risks will go unchecked just as the use of such tools is exploding.

Regulators have a year to issue guidance to organisations, the paper says, with legislation to be introduced “when parliamentary time allows” to ensure they are applying the principles consistently.

Labour’s shadow culture secretary Lucy Powell said: “This regulation will take months, if not years, to come into effect, meanwhile ChatGPT, Google’s Bard, and many others are making AI a regular part of our everyday lives.

“The Government risks reinforcing gaps in our existing regulatory system, and making the system hugely complex for businesses and citizens to navigate. At the same time as they’re weakening those foundations through their upcoming Data Bill.”

There are initially no new legal obligations on regulators, developers or users of AI, with the prospect of only a minimal duty on regulators in future, the Ada Lovelace Institute said.

Michael Birtwistle, an associate director at the research body, said: “The UK approach raises more questions than it answers on cutting-edge, general-purpose AI systems like GPT-4 and Bard, and how AI will be applied in contexts like recruitment, education and employment, which are not comprehensively regulated.

“The Government’s timeline of a year or more for implementation will leave risks unaddressed just as AI systems are being integrated at pace into our daily lives, from search engines to office suite software. We’d like to see more urgent action on these gaps.”

Prime Minister Rishi Sunak, since taking office last year, has spoken of his ambition to turn the UK into a “science superpower”.

In his recent Budget, Chancellor Jeremy Hunt promised to invest close to £1 billion to create a new supercomputer and establish a new AI Research Resource to help UK developers compete on the global market.

Those involved with AI are invited to provide feedback on the Government’s plans through a consultation that closes on June 21.