Facebook ‘turned down’ AI tool to stop hate speech

AI firm Utopia Analytics says it offered to build Facebook a tool which could identify hate speech in ‘milliseconds’ but the social network refused.

Facebook has been accused of turning down the chance to use an artificial intelligence tool which could have helped the firm detect online hate speech in near real-time.

Executives from Finland-based Utopia Analytics – which has created an AI content moderation tool it says can understand any language – said Facebook turned down offers to use the firm’s technology in 2018.

The AI company said it offered to build Facebook a tool within two weeks that could help it better moderate hate speech originating in Sri Lanka, amid rising tensions in the country and reports of more hate speech appearing online.

Appearing before MPs on the House of Commons Digital, Culture, Media and Sport sub-committee on disinformation, Utopia chairman Tom Packalen said Facebook was “not interested” in the technology when the firm approached the social network at the time.

Utopia says its tools can understand context as well as informal language and slang, and can analyse previous publishing decisions made by human moderators to inform their own decisions, which it says are then made in “milliseconds”.

In a further statement, Utopia chief executive Mari-Sanna Paukkeri said: “In March 2018 we showed Facebook that we could get rid of the majority of the hate speech from their site within milliseconds of it appearing.

“Facebook have repeatedly claimed that this technology does not exist but despite what they may say, we have been using it successfully for over three years in many countries and with many businesses.”

Ms Paukkeri also claimed that, had they been implemented, the tools could have made a difference in preventing, or giving warning of, the Easter terror attacks in Sri Lanka, which killed more than 250 people.

In the aftermath of the attacks, Sri Lankan authorities blocked social media amid concerns it was being used to incite violence in the country.

“It is a shame that Facebook decided that their internal considerations were more important than getting rid of the inflammatory rhetoric that was posted on their site,” she said.

“On the other hand, the dangerous and hate-filled language used was said to have been a contributing factor to the attacks so we will never know if taking it down would have made a difference.”

In response, Facebook said AI was an important tool in content moderation, but that more research into the issue was still needed.

“We believe AI is a promising tool to help curb hate speech on the internet, but this is still very much an unsolved problem for the entire industry – it will take many more years of research and engineering to build AI systems that can understand language and context at a level that humans can,” a spokeswoman for the social network said.

The company also pointed to its own technology as being capable of spotting and removing hate speech.

“We have been working hard on these problems for many years, and as a result we were able to remove 4 million pieces of hate speech in the first quarter of this year – 65% of which was detected proactively by our technology before it was reported to us. We can also detect hate speech across 40 different languages,” the social network said.

“We know there is a lot more work to do here, and we will continue to invest in technology as well as our growing team of 15,000 content moderators to identify and remove this content. This year alone we plan to spend more on safety and security than our whole revenue at the time of our IPO. We’re doing this because it’s the right thing to do for the people who use our products.”