Proving Authenticity In A Deepfake World
Reality has a problem.
As if truth and clarity weren’t already hard enough to discern, computer-driven manipulation of digital representations makes it increasingly easy for people and other machines to be fooled. The most talked-about versions of this are “deepfakes,” usually comic or slanderous forgeries of public figures created with artificial intelligence. The confusion is proliferating, however, and is likely to become an issue for many businesses too.
The answer is simple, and achieving it may be a struggle of years: Reality needs a watermark.
Why should a business care if there’s a faked Internet video of Barack Obama saying things he never said, or Steve Buscemi’s face on Jennifer Lawrence’s body? (By the way, both viewable here, in an explanation of how these things are made.)
Imagine, however, someone creating a deepfake of a CEO saying something criminal and profiting from the consequent drop in the company’s value, either by shorting the stock or buying a competitor’s shares. Or bad actors interfering with a company’s supply chain by falsifying communications between two or more parties. Then there’s the possible falsification of a person’s regulatory past (driving infractions, employment history) or medical records.
If digital deceit goes mainstream, it’s clearly likely to affect a lot more than the careers of a few celebrities and elected officials. We move through our complex modern lives on an exceptional amount of trust (that most drivers will abide by the rules of the road, that the ingredients listed on our food packages are what is actually inside, that automatic debits and payments are made correctly), and we’d function far more slowly, if at all, if we had to verify every act taken on trust.
Thinking of this as a trust issue, it’s useful to see deepfakes, social media bot manipulation, and the falsification of individuals or brands as the latest versions of one of society’s oldest deceptive practices: forgery.
Some of the first known coins were debased to harvest their valuable metals. False narratives of the New World as a place of ease and plenty enticed immigration, rewarding ship owners. More recently, copyrights have been violated and luxury goods counterfeited to reward the forgers, at a cost to the original producers. And, more important for our purposes, at a cost to social trust.
The answer has typically been enforcement (capturing and punishing forgers and copyright violators) and raising the cost of falsification (by, say, printing money on unusual paper, or introducing watermarks).
Both techniques, enforcement and raising the cost of faking, are likely to be turned on reality forgers. In September, tech billionaire Mark Cuban suggested on Twitter that pins generating randomized light patterns could be worn at live events to verify what’s going on. In a recent paper, researchers at Stanford, the University of Toronto and Penn State describe building software that can spot imperfections in synthetic videos, as well as potential laws and policies that would discourage a culture of faking.
“You could require the makers of recording devices to identify and encrypt every event. That way you’re protecting the source,” said Francois Chollet, an A.I. engineer at Google (our mutual employer, though neither one of us is speaking for the company here). He warns that “the human mind has evolved to work in a social environment and a cooperative environment. It’s not built for an adversarial world. Now we have to adapt, either through regulation or new social norms.”
Judging from recent experience, he adds, many people don’t seem concerned about whether they’re consuming reality or an alteration of it, as long as it goes along with their biases.
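Chollet’s suggestion of protecting content at the source can be illustrated with ordinary public-key cryptography. Below is a minimal sketch, assuming each recording device holds a private signing key whose public half is published by its manufacturer; the function names and the choice of Ed25519 with SHA-256 are illustrative assumptions, not a description of any shipping system.

```python
# Hypothetical sketch: a camera signs each recording at capture time,
# and anyone with the device's public key can detect later tampering.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def sign_recording(device_key: ed25519.Ed25519PrivateKey,
                   media_bytes: bytes) -> bytes:
    """Hash the raw recording and sign the digest with the device's key."""
    digest = hashlib.sha256(media_bytes).digest()
    return device_key.sign(digest)

def verify_recording(device_pub: ed25519.Ed25519PublicKey,
                     media_bytes: bytes, signature: bytes) -> bool:
    """Recompute the digest and check it against the device's signature."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        device_pub.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

# A tampered copy of the file fails verification.
key = ed25519.Ed25519PrivateKey.generate()   # would live in device hardware
original = b"raw video frames..."
sig = sign_recording(key, original)
assert verify_recording(key.public_key(), original, sig)
assert not verify_recording(key.public_key(), original + b"altered", sig)
```

The watermark here is mathematical rather than visual: the signature travels with the file, and any edit, however small, breaks it.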
In some ways, social standards are already evolving to accompany our technologies. Last year, Google demonstrated an intelligent agent called Duplex, which delivered a more easeful interaction with humans by using not just speech, but dimensions like pacing and intonation that made it seem more human. Within a couple of days the company added an introductory message, in which Duplex announced it was an intelligent agent.
The product was equally effective, and trust was not eroded by talk of people unknowingly interacting with machines. Several months later, California formalized this new territory of “bot disclosure” in a law. It will be interesting to see what the next several years of interactions with smart speakers like Amazon Alexa or Google Home will do to this evolving sensibility.