Facebook cracks down on deepfakes in latest anti-misinformation effort
Facebook is cracking down on deepfake videos in the lead-up to the 2020 US presidential election, though humorous content will be exempt.
The social network has said it will remove misleading manipulated media that has been edited in ways that “aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say”.
Videos will also be banned if they are made by AI or machine learning that “merges, replaces or superimposes content on to a video, making it appear to be authentic”.
“This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words,” the company announced.
The move is the latest effort by the tech giant to rein in misinformation, coming after its decision not to fact-check political advertising emerged ahead of the UK general election.
Facebook came under the spotlight last year for allowing an altered video of US House Speaker Nancy Pelosi to remain on its platform.
Under the new rules, the firm says the Pelosi video can stay online because it does not meet the policy's criteria: only videos generated by artificial intelligence to show people saying fictional things will be taken down.
Once the video of Ms Pelosi was rated by a third-party fact-checker, its distribution was reduced and those who tried to share it – or already had done – received warnings that it was false, the social network added.