- Research
- 16/01/2020
Prof Talk | Deepfakes: do you still believe what you see?
Making a deepfake video is child's play these days. But while a realistic fake video may seem funny at first - remember Obama calling Trump a ‘total dipshit’? - its consequences can be disruptive. After all, where's the line between fake and reality? Concerns like these prompted Facebook last week to ban deepfakes. Even so, we're still in new territory: seeing is no longer believing, stress TU/e experts Peter de With, Joaquin Vanschoren and Emily Sullivan.
Deepfake is a contraction of deep learning - the technology that allows machines to learn ‘independently’ using artificial neural networks trained on large amounts of data - and fake. A realistic fake video, in other words, made using artificial intelligence. This kind of manipulated video is made by mapping a photo of one person's face onto someone else's face and tweaking the moving images. The results are hilarious short films: Jim Carrey is suddenly acting in The Shining, Elon Musk is crawling around as a baby, and Arjen Lubach, a Dutch comedian, morphs into Thierry Baudet (a male right-wing politician), King Willem-Alexander and Yvon Jaspers (a female TV presenter) all at the same time.
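Under the hood, many face-swap tools are built around an autoencoder with one shared encoder and a separate decoder per person: the encoder learns a common representation of facial expression and pose, and decoding person A's frame with person B's decoder produces the swap. The PyTorch sketch below is purely illustrative; the layer sizes, class names and dummy data are invented for this example.

```python
import torch
import torch.nn as nn

# Minimal sketch of the shared-encoder / two-decoder autoencoder idea
# behind many face-swap tools. All dimensions are illustrative.

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

# One encoder learns a shared face representation; each person gets
# their own decoder, trained on reconstruction of that person's frames.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

face_a = torch.rand(1, 3, 64, 64)      # a frame of person A (dummy data)
swapped = decoder_b(encoder(face_a))   # render A's expression as B's face
print(swapped.shape)                   # torch.Size([1, 3, 64, 64])
```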
Deepfake technology is developing so quickly that anyone can now make one of these videos. Last year Samsung already showed that a single facial photo is enough to construct a realistic moving face, and soon it will be easy to play the leading role in a music video or film clip thanks to the popular video app TikTok.
All reasonably innocent, but if images of world leaders or opinion makers are tampered with, people's trust can be harmed and there's a real risk of damage being done to democracy and ultimately to our freedom, warns Emily Sullivan. At the Department of Industrial Engineering and Innovation Sciences she is researching the ethical aspects of artificial intelligence. “We can no longer be entirely sure whether something is real or fake. We are already familiar with this notion thanks to Photoshop, but a video has even more impact. And given the speed with which information is shared on social media, something can escalate in no time.”
‘Be critical’
Despite this, most ‘amateur’ deepfakes can be recognized as such, believes Joaquin Vanschoren, assistant professor in the Data Mining group (W&I). A crash course: “Look closely at whether the mouth movements are in sync with the spoken text; in the majority of amateur videos this is where you can spot errors. Look out for discoloration and odd shadows on the face, often the result of different lighting in the source images. Look at the eyes: do they move with the face in a natural way? And does the text being spoken match the body language? Be critical, I can't say this often enough. In addition, as new algorithms are developed we can also apply artificial intelligence to recognize and neutralize deepfakes at pixel level, a necessary step for the more professional deepfakes. And don't just have algorithms look at the images, but also at the source - where does the video come from, and who has it been shared with?”
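One common way to apply AI at pixel level, as Vanschoren describes, is to score individual video frames with a binary real/fake classifier and average the scores. The sketch below illustrates that pipeline; the ResNet here is untrained and the file name is hypothetical, whereas a real detector would be fine-tuned on a labeled corpus such as the Deepfake Detection Challenge data.

```python
import cv2
import numpy as np
import torch
from torchvision import models, transforms

# Illustrative frame-level deepfake scorer: sample frames, classify
# each one, and average. The model weights here are random.
model = models.resnet18(num_classes=1)  # single logit: "fake" score
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),           # HWC uint8 -> CHW float in [0, 1]
    transforms.Resize((224, 224)),
])

def fake_score(video_path: str, every_nth: int = 10) -> float:
    """Average per-frame probability that the video is a deepfake."""
    cap = cv2.VideoCapture(video_path)
    scores, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_nth == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            x = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                scores.append(torch.sigmoid(model(x)).item())
        i += 1
    cap.release()
    return float(np.mean(scores)) if scores else 0.0

# Usage (hypothetical file): print(fake_score("suspect_clip.mp4"))
```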
But these self-learning algorithms are also precisely the reason that deepfake videos are becoming ever more realistic and thus harder to detect. “This is a classic moving target,” says Peter de With, professor of Video Coding and Architectures (EE). “Both the well-intentioned and the malicious have to keep advancing in intelligence and quality if they want to stay one step ahead. For the time being, as scientists we are on the side of good. We can use artificial intelligence in a multidimensional way. For example, you can capture a person as a combination of physical posture, sound, and the way they move, giving them a unique footprint.”
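The “unique footprint” De With mentions can be pictured as embeddings from several modalities concatenated into one identity vector, which is then compared against a trusted reference by similarity. The sketch below is a schematic illustration only: the embedding functions are placeholders standing in for trained posture, voice and motion models, and the threshold is arbitrary.

```python
import numpy as np

# Schematic "footprint" verification: fuse multimodal embeddings and
# compare to a reference by cosine similarity. Placeholder embeddings.
rng = np.random.default_rng(0)

def posture_embedding(video):  # placeholder for a trained pose model
    return rng.normal(size=64)

def voice_embedding(audio):    # placeholder for a speaker model
    return rng.normal(size=64)

def motion_embedding(video):   # placeholder for a gait/motion model
    return rng.normal(size=64)

def footprint(video, audio):
    v = np.concatenate([posture_embedding(video),
                        voice_embedding(audio),
                        motion_embedding(video)])
    return v / np.linalg.norm(v)   # unit norm, so dot = cosine similarity

def same_person(fp_a, fp_b, threshold=0.8):
    return float(fp_a @ fp_b) >= threshold

reference = footprint("archive_clip", "archive_audio")   # trusted footage
suspect = footprint("suspect_clip", "suspect_audio")     # video under test
print(same_person(reference, suspect))
```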
The results of the surveillance research carried out by De With's group have helped enable this development. Human behavior at stations and the behavior of various road users were analyzed in detail. De With: “There's a great deal we can record. But what's particularly striking is that person recognition can be based on the way someone walks. Everyone moves their shoulders and legs in a unique way. This means you can create software that tests whether the physical posture and movements belong to the person in question.”
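A toy way to see why gait can identify someone: joint trajectories oscillate at a person-specific rhythm, so the frequency spectra of those trajectories can serve as a signature. The sketch below simulates keypoints rather than extracting them from video (a real system would use a pose estimator), and the comparison threshold is invented.

```python
import numpy as np

# Toy gait signature: magnitude spectra of joint trajectories over time.
def gait_signature(keypoints: np.ndarray) -> np.ndarray:
    """keypoints: (frames, joints, 2) array of x/y joint positions."""
    traj = keypoints[:, :, 1]                    # vertical motion per joint
    traj = traj - traj.mean(axis=0)              # remove static offset
    spectrum = np.abs(np.fft.rfft(traj, axis=0)) # phase-invariant spectra
    sig = spectrum.flatten()
    return sig / (np.linalg.norm(sig) + 1e-9)

def same_walker(sig_a, sig_b, threshold=0.9):
    return float(sig_a @ sig_b) >= threshold

# Simulated walk: joints oscillating at a person-specific stride frequency.
def simulate_walk(freq, frames=128, joints=4, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(frames)[:, None]
    phase = rng.uniform(0, 2 * np.pi, joints)
    y = np.sin(2 * np.pi * freq * t / frames + phase)
    x = np.zeros_like(y)
    return np.stack([x, y + 0.05 * rng.normal(size=y.shape)], axis=2)

alice_ref = gait_signature(simulate_walk(freq=6))
alice_new = gait_signature(simulate_walk(freq=6, seed=1))
bob = gait_signature(simulate_walk(freq=9, seed=2))
print(same_walker(alice_ref, alice_new))  # True: same stride frequency
print(same_walker(alice_ref, bob))        # False: different rhythm
```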
Good detection is thus essential to combat the spread of deepfakes, and in this respect the Deepfake Detection Challenge launched last year by Microsoft, Google and Facebook, among others, is a smart move. As for Facebook's recent announcement that it is banning deepfakes from its platform with immediate effect, Sullivan has her doubts: “There is still too much confusion about what the ban will and won't apply to.” Sending a message like this can even have adverse consequences, she says. “This ban might lead people to believe that the platform has rid itself of misleading videos. But the reverse is true: paid political advertisements will still be posted, and they can involve plenty of manipulation.”
Vanschoren, Sullivan and De With can't repeat it often enough: “Be critical about what you see and hear, and if something sensational is posted, check whether reliable, more traditional channels are also picking it up. The era of blindly believing is over.”