Twitter working on rules to deal with deepfakes

The social network is specifically targeting manipulated media that could threaten someone’s physical safety or lead to offline harm.

Twitter is beefing up the fight against manipulated videos like deepfakes, with new rules specifically targeting synthetic content that could threaten someone’s physical safety or lead to offline harm.

Social networks have grappled with the emergence of deepfakes, videos that have been significantly altered from the original with the intention of distorting events.

Deepfakes commonly use artificial intelligence software to combine and superimpose existing images and videos of a person to make it look as though they have said something they have not.

US congresswoman Nancy Pelosi was a notable victim earlier this year, when a doctored video which falsely made her speech appear slurred went viral online.

Twitter is seeking feedback from the public on how its new policy on synthetic and manipulated media should be shaped.

“We’re always updating our rules based on how online behaviours change,” the company said.

“We’re working on a new policy to address synthetic and manipulated media on Twitter – but first we want to hear from you.

“The new policy will address this type of content, specifically when it could threaten someone’s physical safety or lead to offline harm.

“In the coming weeks, we’ll announce a feedback period so that you can help us refine this policy before it goes live.”

Facebook is mulling a deepfakes policy change of its own, after chief executive Mark Zuckerberg admitted that its response to Ms Pelosi’s case was an “execution mistake”, as its systems failed to detect the content before it spread.