
FILM AND NEW MEDIA
Trust, Society, and Manipulated Video

Karno Dasgupta
September 2019
“A total and complete dipshit.” That’s what Barack Obama seemingly called President Trump when he appeared in a short clip for BuzzFeed Video’s YouTube channel in April 2018. Here was a sharp gibe, uncharacteristic of the ex-President. Except, as Jordan Peele’s appearance soon revealed, it wasn’t Obama who was speaking. Instead, a vocal impersonation had been layered over a computer-generated image of his face – one which moved to eerily mimic the words being said. Here was a new genre of fabricated video, where pre-existing visuals of a target’s face are fed into an algorithm to create a realistic reproduction that can be manipulated for the creator’s ends. A deepfake.
While some people noticed that something was off about the way Obama looked and sounded, many others were initially fooled, or at least confused, by the clip. The incident demonstrates how convincing the generative algorithms used to create deepfakes have become. And, scarily, the machine learning processes that synthesize them are only getting better and more accessible, progressively requiring fewer source materials and extending to manufactured voices too. Every day, it becomes easier to forge people. These forgeries can damage individual lives (as in cases of targeted pornography), but they can also pose risks on a global scale by undermining public trust in information sources generally considered reliable.
Jordan Peele’s Obama deepfake on BuzzFeed Video.
Today, we’re at a point where experts are playing catch-up to identify what’s real and what’s not. When the Gabonese President appeared to the public in a video address to quell reports about his ill health on New Year’s Day 2019, for example, many citizens and critics questioned its authenticity. A definitive answer on whether or not it is a deepfake remains elusive. And yet, its uncertain origins helped spur a failed coup a week later. At any other time in history, a recording would have been undeniable proof of something, just as the photograph had promised at its inception. But human innovation has transformed yet another medium of communication for the worse. What Photoshop did to photography, deepfakes do to film. And suddenly, a source and sphere of information is heavily compromised.
No doubt, deepfakes are a tremendous feat of human intelligence, showing how much of what we perceive can be influenced by others. Their rapid proliferation also represents the wonders of a democratized digital world. However, in their quest to give us ever more sophisticated control over older audio-visual technologies, developers have created tools that produce dangerous, false information and threaten society. This is because people either believe a fabricated product and are influenced negatively by it, or they don’t and turn skeptical towards all products, losing their trust in the institution of production itself. Essentially, this maps onto the idea that people make decisions based on collected information, and deepfakes delegitimize a fundamental mode of collecting it.
There is a strong connection between this and the value of trust in our lives. In Trust in Society, Karen S. Cook notes that “trust plays a significant role in the functioning of social groups and societies,” and links trust to order and stability. Trust is foundational to relationships within, and with, an organized collective. In a sense, we need to trust people and institutions to preserve both ourselves and the democratic society we inhabit. If that trust is lost, instability and a loss of connections ensue. In a simplified hypothetical, if you called the police while your house was being burgled and they did not show up, you would suddenly doubt the institution that promises you safety in a city. Repeated failures would make you lose faith in the promise of security implicit in many societies today. You might move to a different location, and you would definitely buy yourself a weapon for protection. In short, you would try to disentangle yourself from one area of your interaction with society.
Now, as people who turn to the media to locate ourselves in a social space, we are strongly influenced by the books we read, the songs we hear, and the news we view. A newspaper is a good source of information about a politician’s opinions, a voice recording of her is better, but a live feed of her saying something is the best basis for trusting that she actually said it. Why? Because our eyes and ears combine to form the primary points of input for our experiences, and short of actually interacting with people face-to-face, videos are the best simulations of “being there.” That is not to say that skepticism and critical thinking are unimportant to being educated consumers – we should question the truth and implications of a politician’s position. But, historically, we could distrust an equivocator without qualms about the way we heard her hedging. Our faith in the medium remained.
The moment we reach manipulation technologies like deepfakes, however, a gateway opens into a world where no one can ever be sure whether someone said something or not. Suddenly, our trust in social institutions of communication begins to evaporate.
Hence, lawmakers in America are scrambling to regulate deepfake technologies, and notable figures across science and programming worry about the numerous ways artificial intelligence could harm society. Deepfakes have the potential to fundamentally alter our experience of reality on an unprecedented scale, with unbelievable speed – in fact, the term “deepfake” was coined only in 2017, and the underlying technology is barely five years old. And the fear everyone has of progress pursued without conscience or broader consideration is amplified in the interconnected present, where rapid, mass consequences arise from limited, specialist development. It is the same fear that made Plato distrust the memory-weakening potential of writing in the Phaedrus, and the Luddites destroy the job-stealing industrial machines – the fear of the price of progress. For technology to change lives, it must bury the way life was once lived.
Deepfakes, on a philosophical level, destabilize the trust in truth essential for us to know things, or even to believe in our ability to know things. They give people the power to make anyone say anything. And if anyone can say anything, then we might as well say nothing at all – or stop listening, at the least. Because a functioning society needs people to trust people. It needs some truth. And that is getting harder to find with each passing day. In this sadder sense, deepfakes are a natural extension of the post-truth world of alternative facts, a rabbit hole that goes all the way down to artificially intelligent robots that can look like your favorite pop star or a notorious demagogue, spewing hate or inciting violence in person. A sorry sight indeed. Regardless, it is unlikely that technology will slow down. Between progress and the past, we only look back, never turn.
In such tumultuous times, the only way to resist a breakdown of social order is to build defenses. Governments should incentivize the development of programs that identify deepfakes, the masses must be educated about the existing misinformation threat, and corporations must invest in checks that filter potential fake content before it goes live. The end goal is a practicable ethical framework that preserves people’s faith in the institutions of communication. We must fight the good fight, or risk losing it all.
Karno Dasgupta is a student at NYUAD, majoring in Literature and Creative Writing.