The Monitor is a weekly column dedicated to everything happening in the WIRED world of culture, from movies to memes, from TV to Twitter.
Future generations will recognize it as a vibe shift. It happened last weekend, when all of a sudden social media feeds filled with images of Pope Francis, typically a pious and humble fellow, looking like a boss in a sleek white puffer coat. It was an instant meme, an LOL in a sea of bad news. It also wasn’t real. Someone had created the image using the artificial intelligence tool Midjourney. But it fooled a lot of people — so many that news outlets started calling it “one of the first examples of large-scale disinformation emerging from artificial intelligence.”
Just typing that sentence feels spooky, like the first time you see someone in a red cloak from The Handmaid’s Tale. Not that this portends dystopia. After all, it was just one image of the pope looking fly. But what if it had been an image purporting to show a battlefield in the war in Ukraine? Or President Biden calling some sort of secret meeting? AI’s capabilities for generating that kind of misinformation are daunting.
Getting dozens of people to fall for a disaster deepfake of Volodymyr Zelensky would, of course, take a bit more effort than tricking them with a goofy picture of the pope. As Charlie Warzel pointed out in The Atlantic this week, everyone uses “different heuristics to get to the truth,” and it’s easier to believe that Pope Francis would wear a puffer than, say, that those AI images of former president Donald Trump being arrested are real. So it’s not hard to see why so many people just saw the image, giggled, and kept scrolling without questioning its authenticity.
But this does set a troubling precedent. The creator of the image of the pope’s coat didn’t try to mislead anyone. In fact, he told BuzzFeed News he was just tripping on magic mushrooms and trying to come up with funny images. But what if it had been part of a disinformation campaign? Much AI-generated content is already so clean that it’s hard for human eyes and ears to detect its origins.
Viewers probably never would have known that Anthony Bourdain’s voice was faked in the documentary Roadrunner if director Morgan Neville hadn’t told The New Yorker. Deepfakes are already being used as political instruments. For now, skeptics can turn to trusted news sources if they suspect an image is fake, but trust in the news media is already approaching record lows. If someone can generate an image of anything, and confidence in the sources that could debunk that image has fallen to an all-time low, who won’t believe their lying eyes?
A few days after the AI-generated images of Pope Francis went viral, the pope was taken to a hospital in Rome for a respiratory infection. He has been improving since then, but as that (real) news spread, it got a little lost among the stories about the fake image. The pope was in the headlines for two very different reasons, and at first glance it was hard to tell which was which.
The age of social media has turned the Very Online into some pretty good sleuths. Skepticism reigns. But so do conspiracy theories. Beyond the post-truth era lies a time when compelling images, text, and even video can be generated from scratch. One of the great promises of the internet was that anyone could broadcast information to a far larger audience than before. For years, the liars were easy to spot: bad URLs, crappy Photoshop jobs, typos — all of these things betrayed the villains. AI can iron out their mistakes. I’m no Chicken Little, but maybe that’s only because I haven’t been fooled by an image of the falling sky yet.