More than £100 million will be spent preparing the UK to regulate artificial intelligence and use the technology safely, the Government said, as it looks to make Britain a global leader in the field.
As part of a Government response to the AI Regulation White Paper consultation, it has announced plans to spend £90 million launching new AI research hubs across the UK that will look into ways of using AI responsibly across areas such as healthcare, chemistry and mathematics.
In addition, the plans include a £19 million investment in 21 projects aiming to develop safe and trusted AI tools which could be used to boost productivity.
A further £10 million investment has also been unveiled to help prepare and upskill regulators across different sectors, such as finance, healthcare, education and telecoms, so that they are ready and able to address the risks and harness the opportunities of AI, the Government said.
As laid out in the AI white paper, first published last year, the Government has chosen to use existing regulators to take on the role of monitoring artificial intelligence use within their own sectors rather than creating a new, central regulator dedicated to the emerging technology.
Ministers have argued this is a more agile approach to the issue.
“The UK’s innovative approach to AI regulation has made us a world leader in both AI safety and AI development,” Technology Secretary Michelle Donelan said.
“I am personally driven by AI’s potential to transform our public services and the economy for the better – leading to new treatments for cruel diseases like cancer and dementia, and opening the door to advanced skills and technology that will power the British economy of the future.
“AI is moving fast, but we have shown that humans can move just as fast. By taking an agile, sector-specific approach, we have begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely.”
The new investment also includes £2 million of Arts and Humanities Research Council (AHRC) funding to support research projects looking to define what responsible AI looks like.
A further £9 million has also been committed to the Government’s International Science Partnerships Fund, which brings together researchers in the UK and US to work on safe AI tools.
The new wave of funding follows the £100 million spent on launching the world’s first AI Safety Institute to monitor and evaluate the potential dangers of new AI models.
At the AI Safety Summit at Bletchley Park last November, major tech firms agreed to submit their new models for review before public launch, something Google confirmed it had begun doing with its latest Gemini model.
Elsewhere in its response, the Government has asked key regulators, including Ofcom and the Competition and Markets Authority, to publish their approach to managing AI by April 30, including naming AI-related risks in their areas and setting out how they plan to regulate the technology over the coming year.
Labour’s shadow minister for AI and intellectual property Matt Rodda said: “While it is welcome to see the Government finally setting out some information about this crucial technology, ministers are still missing a plan to introduce legislation that safely grasps the many opportunities AI presents.
“The United States issued an Executive Order setting out rules and regulations to keep US citizens safe and the EU is currently finalising legislation, but the UK is still lagging far behind, with this white paper response reportedly having been repeatedly delayed.
“Unlike the Tories, Labour sees AI as fundamental to our mission to grow the economy. We will seize the opportunities AI offers to revolutionise healthcare, boost the NHS and improve our public services with safety baked in at every stage of the process.”