Business

Apple co-founder Wozniak in fresh warning over AI dangers

FLASHBACK: Apple co-founder Steve Wozniak speaking at a technology conference in Derry's Millennium Forum in 2013. He is the latest to warn of the potential dangers of AI

APPLE co-founder Steve Wozniak has reiterated his warning of the potential dangers from the massive growth of artificial intelligence (AI) in recent years, with systems such as chatbot ChatGPT now part of everyday life.

In an interview with the BBC, he warned that AI could make scams and misinformation harder to spot, and said he fears the technology will be harnessed by “bad actors”.

His comments come just weeks after he and other technology experts including Twitter owner Elon Musk urged scientists to pause developing AI for at least six months to ensure it does not pose a risk to humanity.

In the interview, he talked up the benefits of AI, but said AI-generated content should be clearly labelled and that regulation was needed for the sector.

He told technology editor Zoe Kleinman: “AI is so intelligent it's open to the bad players, the ones that want to trick you about who they are.”

He does not believe AI will replace people because it lacks emotion, but he warned that, in his view, it will make bad actors more convincing, because programmes like ChatGPT can create text which “sounds so intelligent”.

And he believes responsibility for anything generated by AI and then published should rest with those who publish it: “A human really has to take the responsibility for what is generated by AI.”

A pioneer of computing himself, he believes “we can't stop the technology, but we can prepare people so they are better educated to spot fraud and malicious attempts to take personal information”.

The joint letter he signed in March, calling for the brakes to be applied to AI development, said: “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no-one – not even their creators – can understand, predict or reliably control.

“Contemporary AI systems are now becoming human-competitive at general tasks and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?”

It added: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The technology chiefs called for a halt to developing any AI systems more powerful than the new chatbot GPT-4, and for researchers to focus on making sure the technology is accurate, safe and transparent.

US tech firm OpenAI released its latest version of AI chatbot ChatGPT earlier this year.

ChatGPT was launched late last year and has become an online sensation thanks to its ability not only to hold natural conversations but also to generate speeches, songs and essays.

The bot can respond to questions in a human-like manner and understand the context of follow-up queries, much like in human conversations. It can even admit its own mistakes or reject inappropriate requests.

According to OpenAI, GPT-4 has “more advanced reasoning skills” than ChatGPT but, like its predecessors, GPT-4 is still not fully reliable and may “hallucinate” – a phenomenon where AI invents facts or makes reasoning errors.

The letter said humanity can enjoy an “AI summer”, reaping the rewards of these systems, but only once safety protocols have been put in place.

The letter added: “Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all and give society a chance to adapt.

“Society has hit pause on other technologies with potentially catastrophic effects on society.

“We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.”