Facebook reveals amount of abusive content discovered and removed from site

The social network has released its internal enforcement figures for the first time.

Facebook has released internal figures on abusive content found and removed from the site for the first time, revealing the scale of malicious content on the social network.

Under increasing pressure to disclose how it polices its platform, Facebook revealed it took down 837 million pieces of spam content between January and March of this year.

The social network said “nearly 100%” of this content was found and removed before it was reported, alongside the removal of around 583 million fake accounts, most of which were disabled within minutes of being activated.

However, the social network admitted its automated tools were still struggling to pick up hate speech: of the more than 2.5 million such posts removed, only 38% were spotted by the firm’s technology.

Facebook’s vice president of product management Guy Rosen said more work needed to be done to improve such detection tools.

“We have a lot of work still to do to prevent abuse. It’s partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important,” he said.

“For example, artificial intelligence isn’t good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue.

“In addition, in many areas — whether it’s spam, porn or fake accounts — we’re up against sophisticated adversaries who continually change tactics to circumvent our controls, which means we must continuously build and adapt our efforts.

“It’s why we’re investing heavily in more people and better technology to make Facebook safer for everyone.”

The figures also revealed that Facebook believes between three and four per cent of its more than two billion monthly active accounts are fake.

The site also said it took down 21 million pieces of nudity and sexual activity-related content, 96% of which was found and flagged by its systems before being reported.

In terms of graphic violent content, Facebook said more than 3.4 million posts were either taken down or given warning labels, 86% of which were spotted by its detection tools.

However, the social network said this was up from 1.2 million at the end of 2017, and while the majority of the increase was down to improvements in its detection technology, some of the rise was due to an increase in such content appearing on the platform.

Mr Rosen said making the figures public would “push” the company to improve more quickly.

“We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too,” he said.

“This is the same data we use to measure our progress internally — and you can now see it to judge our progress for yourselves.”

However, the publication comes after a House of Commons committee labelled written evidence submitted by the firm to its fake news inquiry as disappointing.

Commons Culture Committee chairman Damian Collins said Facebook failed to provide “a sufficient level of detail and transparency” in its response following an appearance by chief technology officer Mike Schroepfer.

The committee has also urged Facebook boss Mark Zuckerberg to appear before them, adding that it would be open to taking evidence from the billionaire company founder via video link if he would not attend in person.