Facebook auto-generating videos showing extremist images

Details emerged in a whistleblower’s complaint to the US Securities and Exchange Commission.

A confidential whistleblower’s complaint to a US financial watchdog alleges that Facebook has exaggerated its success in removing extremist content.

Even worse, it alleges that the company is inadvertently using propaganda from militants and hate groups to auto-generate videos and pages that could be used for networking.

Facebook likes to give the impression that it is staying ahead of extremists by taking down their posts, often before users even see them. But over five months last year, researchers monitored pages belonging to users who affiliated themselves with groups designated by the US as terrorist organisations.

In that period, only 38% of posts with prominent symbols of extremist groups were removed.

The confidential whistleblower’s report (Jon Elswick/AP)

The whistleblower’s complaint to the US Securities and Exchange Commission was obtained by the Associated Press.

In its own review, the AP found that as of this month, much of the banned content cited in the study — an execution video, images of severed heads, propaganda honouring martyred militants — slipped through the algorithmic web and remained easy to find on Facebook.

The complaint is landing as Facebook tries to stay ahead of a growing array of criticism over its privacy practices and its ability to keep hate speech, live-streamed murders and suicides off its service.

In the face of criticism, chief executive Mark Zuckerberg has spoken of his pride in the company’s ability to weed out violent posts automatically through artificial intelligence.

Mark Zuckerberg (Niall Carson/PA)

Last month, he repeated a carefully worded formulation Facebook has been employing: “In areas like terrorism, for al Qaida and Isis-related content, now 99% of the content that we take down in the category, our systems flag proactively before anyone sees it.

“That’s what really good looks like.”

He did not offer an estimate of how much of the total prohibited material is being removed.

The research behind the SEC complaint is aimed at spotlighting flaws in the company’s approach.

While the study is far from comprehensive — in part because Facebook rarely makes much of its data publicly available — researchers involved in the project say that the ease of identifying these profiles with a basic keyword search, and the fact that so few of them have been removed, suggest Facebook’s claims that its systems catch most extremist content are not accurate.

“I mean, that’s just stretching the imagination to beyond incredulity,” says Amr Al Azm, one of the researchers involved in the project.

Amr Al Azm (John Minchillo/AP)

“If a small group of researchers can find hundreds of pages of content by simple searches, why can’t a giant company with all its resources do it?”

Facebook concedes that its systems are not perfect, but says it is making improvements.

“After making heavy investments, we are detecting and removing terrorism content at a far higher success rate than even two years ago,” the company said in a statement.

“We don’t claim to find everything and we remain vigilant in our efforts against terrorist groups around the world.”