A series of videos called “deepfakes,” made using technology that allows users to digitally superimpose one person’s face onto another’s body, has sparked discussion about how such videos will affect the credibility of media going forward. Their propagation has concerned Dartmouth professors, including computer science professor Hany Farid, who attended the Defense Advanced Research Projects Agency’s media forensics program meeting in January.
Software called FakeApp, released in January, makes it easy for people to create these “deepfake” videos. Users choose a single video they would like to alter, along with a collection of images of the individual whose face they want superimposed onto the person in the video. The software then works through the image collection, synthesizing a face to match the one in each frame of the original clip, continuing until the new face is placed throughout the video.
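For readers curious about the mechanics, the following is a minimal Python sketch of that frame-by-frame idea, written with the open-source OpenCV library. It is not FakeApp’s actual method (FakeApp trains a neural network to synthesize a new face for every frame); this toy version simply detects a face in each frame and pastes a static replacement image over it, and the file names are placeholders.

```python
import cv2

# Toy illustration of per-frame face replacement. FakeApp itself trains a
# neural network to synthesize a matching face for each frame; this sketch
# only pastes a static face image over whatever face a Haar-cascade detector
# finds, which conveys the frame-by-frame idea without any real synthesis.
# File names below are placeholders, not real assets.

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
new_face = cv2.imread("replacement_face.jpg")  # face to superimpose

source = cv2.VideoCapture("original_clip.mp4")
fps = source.get(cv2.CAP_PROP_FPS)
width = int(source.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(source.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter("swapped_clip.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), fps,
                         (width, height))

while True:
    ok, frame = source.read()
    if not ok:  # no more frames in the source clip
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Find faces in this frame; (x, y, w, h) is each face's bounding box.
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        # Resize the replacement face to the detected region and paste it in.
        frame[y:y + h, x:x + w] = cv2.resize(new_face, (w, h))
    writer.write(frame)

source.release()
writer.release()
```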
Although the popularity of deepfakes has concerned many people, the technology’s capabilities are not new, according to Farid, who specializes in digital forensics. However, deepfakes have become relatively easy for amateurs to create, dramatically expanding the pool of people who can make fake videos.
“It’s extremely simple [to use],” said Catalin Grigoras, director of the National Center for Media Forensics at the University of Colorado Denver. “Anybody can just watch the presentation on YouTube.”
While some of the videos made using FakeApp have been innocuous — superimposing actor Nicolas Cage into a James Bond film, for example — others are controversial, including pornography featuring celebrities’ faces without their permission. A forum on the website Reddit that shared deepfakes, including pornographic videos, was banned by the site earlier this month for hosting sexual images of people without their consent.
“I’m really offended by this pornography,” Farid said. “There should be pressure to not allow people to do this — this is incredibly disrespectful to the women involved.”
Farid encouraged people to stop and think about what they are viewing to prevent the spread of manipulated videos.
“People are far too quick to click the ‘like’ button, to click the ‘share’ button, to click the ‘retweet’ button without even reading the article,” he said.
Government professor Brendan Nyhan, an expert on political misconceptions and conspiracy theories, similarly advised internet users to be “more careful about what [they] share and take down any information that [they] later learn to be false.”
Because of the program’s technological power, Farid and Nyhan fear that people will no longer be able to differentiate between genuine and manipulated footage. Nyhan added that he worries the videos could make people more wary of trusting mainstream, legitimate news sources.
“This is a potentially explosive kind of misinformation, one that could be used to steer specific individuals in a different way than maybe we’ve seen before,” Nyhan said.
Nyhan said he is worried that people may exploit the technology to further their political agendas. Given the role that fake news played in the 2016 election, it is not improbable that fraudulent videos will come into play in the future.
“I do think that these tools could and are likely to be abused, and are likely to be abused by people who have some nefarious political goal there,” Nyhan said.
To combat the potential ramifications of such rapid technological innovation, Farid advocated for the establishment of a cyber-ethics panel at the federal level, or possibly at the United Nations. According to Farid, the panel should be responsible for anticipating issues that could arise in the long term and putting reasonable safeguards in place to prevent the technology’s exploitation.
Farid asserted that the general public will not be the only ones who find it difficult to discern the truth. Digital evidence is frequently used in legal proceedings, but with the proliferation of fake videos, courts may no longer be able to trust electronic proof, he said. Similarly, media outlets that welcome contributions from citizen journalists will not be able to rely on their photos, he said.
“How does the media trust that the photos they are getting from anything — natural disasters, conflicts, Arab Spring, riots — how do they trust that those are real or not?” Farid said.
The average internet user might not be able to recognize that a video has been altered, and in the time it takes a forensics expert to identify a falsified clip, many viewers may already have watched and shared it.
“This can be a critical timeframe,” Grigoras said, “when people only see a video and there’s nobody to prove that it was tampered with.”
Despite the threats deepfakes pose online, Grigoras advised people to be aware but not afraid. Farid echoed Grigoras’ sentiment.
“I think we should take what is happening,” Farid said, “and what has been happening for the last few years, and start taking a long, hard look at how we consume and share digital content online.”