Leading AI scientists pledge not to create killer robots
Thousands of researchers specialising in artificial intelligence have signed an agreement not to help develop autonomous weapons capable of killing.
The co-founders of Google’s AI division DeepMind and Elon Musk, the SpaceX and Tesla supremo, are among the 2,400 signatories of a letter against the creation of AI systems able to make life-taking decisions.
The letter, put together by the Future of Life Institute, calls for governments to adopt regulations and laws against lethal autonomous weapons and pledge to “neither participate in nor support the development, manufacture, trade or use” of “killer robots”.
“There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI,” the experts said.
“Lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilising for every country and individual.
“Thousands of AI researchers agree that by removing the risk, attributability and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems.”
Last year, the UK Ministry of Defence said it was not in possession of fully autonomous weapons and insisted weapons should always be under human control.
“It’s absolutely right that our weapons are operated by real people capable of making incredibly important decisions, and we are guaranteeing that vital oversight,” said armed forces minister Mark Lancaster.
Earlier this week, Defence Secretary Gavin Williamson unveiled the Tempest, a new fighter jet that will be able to fly unmanned and will carry a host of next-generation technology.