Facebook Live rules tightened following Christchurch attack

The social network is introducing a stricter ‘one strike’ policy to prevent terrorist and violent Facebook Live broadcasts.

Facebook will restrict people who have broken certain rules from using its Live streaming feature, in response to the mosque terror attack in Christchurch, New Zealand.

The social network is toughening its stance on live broadcasts with a “one strike” policy: any account that violates Facebook’s most serious policies will face restrictions from the first offence.

This means that if someone were to share a statement from a terrorist group with no context, for example, they would be immediately blocked from using Live for a set period, such as 30 days.

Facebook said it intends to extend restrictions into other areas over the coming weeks, beginning with preventing those same people from creating ads on Facebook.

“We recognise the tension between people who would prefer unfettered access to our services and the restrictions needed to keep people safe on Facebook,” said Guy Rosen, Facebook’s vice president of integrity.

“Our goal is to minimise risk of abuse on Live while enabling people to use Live in a positive way every day.”

The Christchurch attack in March was broadcast live on the platform. Facebook said at the time that the video was viewed fewer than 200 times during the live broadcast and about 4,000 times in total before being removed.

In addition, the company has pledged 7.5 million dollars (£5.8 million) towards new research partnerships in a bid to improve its ability to automatically detect offending content, after some edited versions of the Christchurch attack video managed to bypass its existing detection systems.

It will work with the University of Maryland, Cornell University and the University of California, Berkeley, to develop new techniques to detect manipulated media, whether imagery, video or audio, as well as ways to distinguish between people who unwittingly share manipulated content and those who deliberately create it.

“This work will be critical for our broader efforts against manipulated media, including DeepFakes,” Mr Rosen continued.

“We hope it will also help us to more effectively fight organised bad actors who try to outwit our systems as we saw happen after the Christchurch attack.”

Google also struggled to remove new uploads of the attack on its video-sharing website YouTube.

During the opening of a Google Safety Engineering Centre (GSEC) in Munich on Tuesday, Google’s senior vice president for global affairs, Kent Walker, admitted that the tech giant still needed to improve its systems for finding and removing dangerous content.

“In the situation of Christchurch, we were able to avoid having live-streaming on our platforms, but then subsequently we were subjected to a really somewhat unprecedented attack on our services by different groups on the internet which had been seeded by the shooter,” Mr Walker said.

Google, Facebook, Microsoft and Twitter are taking part in a summit in Paris on Wednesday involving French president Emmanuel Macron and New Zealand’s prime minister Jacinda Ardern, to address terrorist and violent content online.