Monday, September 23, 2019

Deepfakes, Privacy, and Deception

by Amanda McAllister and Navin Ramalingam

A “deepfake” is an ultrarealistic fake video made with artificial intelligence software. The term is a portmanteau of “deep learning,” the machine-learning technique involved, and the word “fake.” Essentially, it is the end product of a computer program “learning” a map of the target subject’s face, finding common ground between two faces, and stitching one face over the other in a video-editing process.
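For readers curious about the mechanics, below is a minimal, hypothetical sketch of the shared-encoder, dual-decoder autoencoder design commonly associated with face-swapping software. It is written in PyTorch purely for illustration; the layer sizes, the 64x64 crop size, and all names are assumptions, and a real system would also require face detection, alignment, training on many frames of each person, and blending the result back into the video.

```python
# Minimal sketch (illustrative assumptions throughout) of a shared-encoder,
# dual-decoder autoencoder of the kind commonly used for face swapping.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses 64x64 RGB face crops into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),    # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop from the latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, 8, 8)
        return self.net(h)

# One shared encoder learns features common to both faces ("finding common
# ground"); each decoder learns to render one specific identity from them.
encoder = Encoder()
decoder_a = Decoder()  # would be trained only on face A
decoder_b = Decoder()  # would be trained only on face B

# The swap: encode a frame of face A, then decode it with B's decoder, so
# B's likeness is rendered with A's pose and expression.
frame_of_a = torch.rand(1, 3, 64, 64)  # stand-in for a real cropped video frame
with torch.no_grad():
    swapped = decoder_b(encoder(frame_of_a))
```

In this design the encoder is what “learns the map” of each face, and reusing it across both identities is what makes the swap possible; the heavy lifting in practice lies in the data collection and post-processing, not in the network itself.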

Manipulating video is not necessarily a recent invention. Hollywood has been doing it for many years, such as when film effects were used to make Joseph Gordon-Levitt look like a young Bruce Willis in the film Looper, or when a young Carrie Fisher was digitally recreated over actress Ingvild Deila for the Princess Leia cameo in Rogue One. Face-morphing features are also an essential part of multimedia messaging applications like Snapchat.

While the technology may not be inherently illegal or unethical, some manifestations of deepfakes do have the potential to be illegal, to create liability, to spread misinformation, or to violate the privacy of their subjects.

For instance, people have been using facial recognition apps and deepfake technology to superimpose the faces of well-known celebrities and ordinary people over those of actors in pornographic films, or over nude photos, to create nonconsensual pornography. The past year has seen the release of several consumer deepfake apps that let users create their own deepfakes, including one disturbing app that offered to create nonconsensual pornography by “undress[ing] photos of women” and making them look “realistically nude.” Fortunately, this app has since been taken down.

In addition to privacy and consent violations, experts have warned that deepfake technology could “spark violent outbreaks” by pushing false conspiracies through the sharing of fake videos. Some experts have even called deepfakes a potential threat to privacy, democracy, and national security. Popular social media platforms have reacted to the backlash against deepfakes by deleting deepfake content and blocking those who publish it.

In June, U.S. Rep. Yvette Clarke introduced the DEEPFAKES Accountability Act. The bill aims to hold deepfake creators accountable by requiring watermarks and disclaimers identifying deepfakes as manipulated content. Earlier this year, Virginia updated its revenge porn law to include deepfakes, prohibiting the creation, adaptation, or modification of a video or picture in the context of revenge porn.

The ability to create nonconsensual, realistic videos, and the ease with which they can be made, could have serious legal implications. So while some have called for new laws to combat deepfakes, others are confident that victims can find recourse in existing legal frameworks such as copyright infringement, defamation, false light, intentional infliction of emotional distress, invasion of privacy, and the right of publicity. Some have also cautioned that laws prohibiting deepfakes could infringe on First Amendment rights, for example by sweeping in satirical videos.

The advent of new and advanced technologies often coincides with their exploitation to harm others, and with a lagging legal system grappling with the far-reaching consequences of how those technologies are used. Deepfakes likely aren’t going anywhere, and the proliferation of technologies capable of identifying manipulated videos as such will be important to combating privacy violations, defamation, and the mass spread of misinformation.
