Why everyone’s talking about Facebook’s ‘deepfake’ ban

Social network to remove videos modified by artificial intelligence from its platforms

Facebook CEO Mark Zuckerberg (Image credit: Josh Edelson/AFP/Getty Images)

Facebook has announced plans to remove any videos doctored using artificial intelligence (AI) from its social networking platforms ahead of the upcoming US presidential election.

The videos, known as “deepfakes”, are modified to look real and have been shown to be highly convincing and difficult to debunk online.

In a blog post this week, Facebook admitted that deepfakes present a “significant challenge” to technology and social networking sites, but promised to tackle “all types of manipulated media”.


What is a deepfake?

Deepfakes are videos in which AI has been used to superimpose the face of a person onto the body of another.

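For readers curious about what "superimposing one face onto another" involves under the hood, the sketch below illustrates the shared-encoder, per-identity-decoder autoencoder design popularised by early face-swap tools. It is a minimal illustration only, assuming PyTorch; the class names, network sizes and the final swap step are assumptions for demonstration, not Facebook's detection method or any particular tool's real code.

```python
# Minimal sketch (assumed PyTorch) of the classic deepfake face-swap architecture:
# one encoder shared across two identities, plus one decoder per identity.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent vector (shared by both identities)."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent vector; one decoder is trained per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # would be trained only on faces of person A
decoder_b = Decoder()  # would be trained only on faces of person B

# After training, the swap is: encode a frame of person B, then decode it with A's
# decoder, so A's face appears with B's pose and expression.
frame_of_b = torch.rand(1, 3, 64, 64)     # stand-in for a real cropped video frame
swapped = decoder_a(encoder(frame_of_b))  # output tensor shaped (1, 3, 64, 64)
```

Because the encoder learns pose and expression while each decoder learns the appearance of one person, feeding one person's frames through the other person's decoder is what produces the convincing "superimposed" face.
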
The controversial practice first made headlines in 2017, when internet users published pornographic videos featuring the likenesses of female celebrities including Taylor Swift and Katy Perry.

Most of the sites hosting the videos subsequently removed the content and banned deepfakes from their platforms, but fake footage featuring high-profile figures from a variety of fields has continued to spread online.

“While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases,” writes Monika Bickert, vice president of global policy management at Facebook, in the blog post.

Why is Facebook banning them?

The social media giant, which also owns Instagram, has pledged to remove doctored videos if it is not obvious to an average person that they have been edited, or if they give a false impression that the subject of the video has said or done something they have not.

“There are people who engage in media manipulation in order to mislead,” warns Bickert in the blog.

Banning deepfake videos is part of Facebook’s attempt to get ahead of a wave of misleading media expected to be shared in the run-up to the US presidential election in November.

And the reaction?

Some critics argue that Facebook’s new policy does not go far enough. For instance, the ban on deepfakes will not apply to videos deemed to be parody or satire.

The social network was criticised last summer for refusing to remove a viral video of US House of Representatives Speaker Nancy Pelosi that was doctored to make it sound like she was drunkenly slurring her words.

In a statement to Reuters this week, Facebook said: “The doctored video of Speaker Pelosi does not meet the standards of this policy and would not be removed. Only videos generated by artificial intelligence to depict people saying fictional things will be taken down.”

So-called “shallow fakes” – videos made using conventional editing tools and not edited by AI – will also still be allowed.

To date, there have been no major examples of deepfake content being uploaded to Facebook platforms that would break the new rules. As The Guardian notes, the “most damaging examples of manipulated media in recent years have tended to be created using simple video-editing tools”.

But computer scientist William Tunstall-Pedoe, whose AI company Evi invented the technology behind Amazon’s Alexa, told the BBC that Facebook deserved credit for trying to tackle the “difficult area”.

“The fact the video is fake and intended to be misleading is the key thing for me,” he said. “Whether sophisticated AI techniques are used or less sophisticated techniques isn’t relevant.”

Why is Facebook worried about the election?

The multinational is still dealing with the fallout from accusations that it allowed disinformation to spread during the 2016 presidential election and the 2018 US midterms.

And Facebook’s reputation has taken a further hit from its involvement in the Cambridge Analytica data harvesting scandal and controversies over targeted political advertising.

The resulting pressure from lawmakers, journalists and activists to crack down on the spread of misleading or false information may be a major factor in the new ban on deepfakes.
