Trust and ethics considerations ‘have come too late’ on AI technology

(Yui Mok/PA)

Considerations about trust and ethics in the development of artificial intelligence (AI) technologies have come too late in the process, a leading expert has said.

Dr Lynne Parker, who served in the White House Office of Science and Technology Policy between 2018 and 2022, said that until five years ago, ethics was rarely discussed within the AI research community at technology conferences.

She said AI systems – which involve the simulation of human intelligence processes by machines – have been around since the 1950s, but added that as these technologies become more widespread, questions are being raised about the ability to trust them.

Speaking at the IEEE International Conference on Robotics and Automation (ICRA) at ExCeL in London, Dr Parker, director of the AI Tennessee Initiative at the University of Tennessee, said: “If you look at the dialogue today around artificial intelligence, you can see that so much of that discussion is about the issue of trust, or more specifically, the lack of trust in AI systems.

“AI has been around since the 50s, so it’s not like the research has just happened. But the discussion now is because we realised that AI is impacting nearly every sector of society.

“And so, because of that, these technologies then have been thrust upon the public, and now the public is standing up and saying, ‘What are we doing about trust?’.”

Geoffrey Hinton, dubbed the “godfather” of AI, resigned from his job at Google earlier this month, saying that in the wrong hands, AI technologies could be used to harm people and spell the end of humanity.

On Tuesday, he and other big names in the industry – including Sam Altman, chief executive of ChatGPT developer OpenAI, and Demis Hassabis, chief executive of Google DeepMind – called for global leaders to work towards mitigating the risk of “extinction” from the technology.

A growing number of experts have said AI development should be slowed or halted, with more than 1,000 tech leaders – from Twitter boss Elon Musk to Apple co-founder Steve Wozniak – signing a letter in March to call for a “moratorium”.

AI apps such as Midjourney and ChatGPT have gone viral on social media sites, with users posting fake images of celebrities and politicians, and students using ChatGPT and other large language models to generate university-grade essays.

But AI can also perform life-saving tasks, such as algorithms analysing medical images like X-rays, scans and ultrasounds, helping doctors to identify and diagnose diseases such as cancer and heart conditions more accurately and quickly.

Last week Prime Minister Rishi Sunak spoke about the importance of ensuring the right “guard rails” are in place to protect against potential dangers, ranging from disinformation and national security to “existential threats”, while also driving innovation.

But Dr Parker said that despite the increased attention on ethics across AI research, these considerations are coming too late in the development cycle that takes technologies from research to widespread adoption in society.

She said: “We have an AI ethics research area… we have AI technical research areas, and they have not really connected.”

Speaking about social implications, Dr Parker said that for robots to be accepted as part of society, ethics and trustworthiness need to be built into every step of the research.

She said: “Social robotics, of course, has important ethical considerations, probably more so than other areas of robotics, because the social robots are in the same workspace with people.

“They’re often working closely with people, and certainly, there has been important research that’s been done in looking at these ethical considerations of social robotics.

“Social robots are often being used with vulnerable populations – with children, with the elderly, (with) maybe those who are ill… (and) there can be some implications of that as it relates to societal concerns.”