Manipulated images in political propaganda are nothing new. One of history’s most famous examples is a 1937 photograph of Joseph Stalin walking along the Moscow Canal with Nikolai Yezhov, the once-powerful head of the Soviet secret police who organized the “Great Purge.” By 1940, Yezhov had fallen victim to the grim system of his own making and was executed. The photograph was later doctored to remove him from the scene, his figure replaced with background scenery.
New technology has revitalized these old tricks. In an age of social media and artificial intelligence (AI), forged imagery has become all the more realistic – and all the more dangerous.
In a new study, researchers at University College Cork in Ireland took a close look at how people on the social media platform X (formerly Twitter) responded to deepfake content during the ongoing Russian-Ukrainian war. The main takeaway is that deepfakes have eroded many people's trust in the authenticity of wartime media, almost to the point where nothing can be believed anymore.
“Researchers and commentators have long feared that deepfakes have the potential to undermine truth, spread misinformation, and undermine trust in the accuracy of news media. Deepfake videos could undermine what we know to be true when fake videos are believed to be authentic and vice versa,” Dr Conor Linehan, study author from University College Cork’s School of Applied Psychology, said in a statement.
Deepfakes are audiovisual media, often depicting people, that have been manipulated using AI to create a false impression. A common technique involves “pasting” someone’s face (say, a famous politician’s) onto video of someone else’s body. This lets the creator make the politician appear to say or do whatever they please, like a hyper-realistic virtual puppet.
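For the curious, here is a deliberately crude sketch of that “pasting” idea using classical computer-vision tools rather than AI. Real deepfakes are built with neural networks such as autoencoders and GANs that learn to match lighting, expression, and motion frame by frame; this toy version merely locates a face in one photo and blends it over the face in another. The file names are placeholders, and the sketch assumes the opencv-python and numpy packages are installed.

```python
# Toy illustration of the naive face-"pasting" idea behind deepfakes.
# Real deepfakes use neural networks (autoencoders/GANs); this sketch
# only demonstrates the concept with classical computer-vision tools.
import cv2
import numpy as np

def naive_face_swap(source_path: str, target_path: str, out_path: str) -> None:
    source = cv2.imread(source_path)  # photo containing the face to copy
    target = cv2.imread(target_path)  # photo receiving the face

    # Locate faces with OpenCV's bundled Haar-cascade detector.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    src_faces = cascade.detectMultiScale(
        cv2.cvtColor(source, cv2.COLOR_BGR2GRAY), scaleFactor=1.1, minNeighbors=5)
    dst_faces = cascade.detectMultiScale(
        cv2.cvtColor(target, cv2.COLOR_BGR2GRAY), scaleFactor=1.1, minNeighbors=5)
    if len(src_faces) == 0 or len(dst_faces) == 0:
        raise ValueError("could not find a face in one of the images")

    sx, sy, sw, sh = src_faces[0]  # first detected face in the source
    dx, dy, dw, dh = dst_faces[0]  # first detected face in the target

    # Resize the source face to fit the target face region, then blend it
    # in with Poisson (seamless) cloning so the seams are less obvious.
    face = cv2.resize(source[sy:sy + sh, sx:sx + sw], (dw, dh))
    mask = np.full(face.shape[:2], 255, dtype=np.uint8)
    center = (dx + dw // 2, dy + dh // 2)
    result = cv2.seamlessClone(face, target, mask, center, cv2.NORMAL_CLONE)

    cv2.imwrite(out_path, result)

# Hypothetical usage; the image files here are placeholders.
naive_face_swap("source.jpg", "target.jpg", "swapped.jpg")
```

Even this cut-and-paste approach produces a visibly fake result. What makes AI-generated deepfakes so much more dangerous is that the networks learn to blend the swapped face convincingly across every frame of a video.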