
AI now spotting more than half of abusive tweets, says Twitter

The social media site has detailed its policing of the platform in its latest Transparency Report.

Abusive tweets are now spotted by artificial intelligence (AI) more often than by humans, Twitter said in its latest transparency report.

The social media giant said more than 50% of the tweets it had taken action on in the first half of this year had been flagged by its detection technology rather than human reviewers.

The firm’s latest transparency report, covering January to June 2019, also revealed a 105% increase in the number of accounts the company had either locked or suspended for breaking Twitter rules.

Twitter said the number of accounts suspended from the site for offences related to the promotion of terrorism continued to decline, down by 30% so far this year compared to 2018.

“Our continued investment in proprietary technology is steadily reducing the burden on people to report to us,” the social media giant said.

“For example, more than 50% of tweets we take action on for abuse are now being surfaced using technology. This compares to just 20% a year ago.

“Additionally, due to a combination of our increased focus on proactively surfacing potentially violative content for human review and the inclusion of impersonation data for the first time, we saw a 105% increase in accounts locked or suspended for violating the Twitter rules.”

According to the firm’s figures, reports of spam accounts on the platform remain “steady”, with Twitter recording a decrease of about 1% in reports of spam accounts or spam-like behaviour.

The number of reports of hateful content has risen by 48%, while the number of abusive content reports increased by 22%, Twitter said.

The report also said more than 244,000 accounts were suspended for rule violations linked to child sexual exploitation, with 91% of these found using technology tools such as PhotoDNA, image hashing software created by Microsoft.
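PhotoDNA itself is proprietary and relies on a perceptual hash designed to survive resizing and re-encoding, but the general workflow it supports can be sketched: compute a hash for each uploaded image and look it up in a database of hashes of known material. The Python sketch below illustrates only that matching step, under stated assumptions; the hash database and function names are hypothetical, and a cryptographic hash stands in for the perceptual one.

```python
import hashlib
from pathlib import Path

# Hypothetical database of hashes of known prohibited images, as might be
# supplied by a clearing house. PhotoDNA uses a proprietary perceptual hash
# that tolerates resizing and re-encoding; a cryptographic hash is used here
# only to illustrate the matching workflow.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def image_hash(path: Path) -> str:
    """Return a hex digest of the raw image bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_known_image(path: Path) -> bool:
    """Flag an uploaded image if its hash matches the known-content database."""
    return image_hash(path) in KNOWN_HASHES
```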

“We remain deeply committed to transparency at Twitter; it continues to be one of our key guiding principles,” the company said.

“This commitment is reflected in the evolution and expansion of the report in recent years.

“It now includes dedicated sections on platform manipulation and spam, our Twitter rules enforcement and state-backed information operations we’ve previously removed from the service.

“This report reflects not only the evolution of the public conversation on our service but the work we do every day to protect and support the people who use Twitter.”