Artificial intelligence: Deepfakes – that's what we should be concerned about
Deepfakes – fake videos made with artificial intelligence – are flooding the Internet, and they are becoming ever easier to produce. What are the three greatest dangers?
Any video today can be a lie. Deepfakes look like realistic recordings – but they are fake. Anyone can now use suitable software to create deceptively real-looking videos in which people's faces are swapped, in which people say things they never said or do things they never did.
What are the implications for politics and society when video or audio recordings can no longer be taken as clear evidence of reality?
1. Deepfakes as a weapon against women
The Dutch company Deeptrace examined around 15,000 deepfake videos last year. 96 percent of them were pornography – in most cases, the faces of Hollywood actresses or famous pop singers had been inserted into sex films. The singer Lena Meyer-Landrut, for example, has also already been a victim of deepfake porn.
But private individuals can also be affected: in Australia, Noelle Martin became the victim of a deepfake. Someone took a photo of the then 17-year-old and inserted her face into pornographic pictures and videos. Noelle Martin fought back – successfully. Together with other activists, she campaigned for tougher laws. Such deepfakes have been banned in Australia since 2018 and are punishable by several years in prison.
2. Deepfakes can incite people
The potential for abuse is great, especially on social networks, where fake news spreads rapidly. Martin Steinebach is a deepfake expert at the Fraunhofer Institute for Secure Information Technology in Darmstadt. He sees a high risk that deepfakes will be used to shape opinion, especially within certain filter bubbles.
Steinebach does not believe, however, that deepfakes can trigger major social upheavals, up to and including war between states: Hollywood – like the intelligence services – has long been able to produce plausible imitations.
Martin Steinebach comes to the conclusion:
"If it were that easy to throw the world into chaos or to pit countries against one another, it would probably have happened already."
A greater danger, warns Wiebke Loosen from the Hans Bredow Institute for Media Research in an interview with ZDFheute, is that people lose confidence that they can tell reality and fake apart at all. A letter from the federal government states: "Deep fakes can weaken public trust in the fundamental authenticity of audio and video recordings and thus the credibility of publicly available information."
3. Reality can be dismissed as a deepfake
The problem: if every video and every audio recording can be a lie, it becomes easier for the truly guilty to dismiss the truth as fake. And that has already happened: after an old audio recording of Donald Trump became public in which he made derogatory comments about women ("you can put your hand between their legs"), Trump initially apologized ruefully – only to later cast doubt on the recording's authenticity.
What are deepfakes?
Deepfakes are images and videos created with the help of artificial intelligence and a great deal of computing power. The term is a blend of the English words "deep learning" and "fake" – that is, machine learning with artificial intelligence, and forgery. Appropriately trained neural networks can produce deceptively real-looking images and videos largely autonomously.
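The principle behind classic face-swap deepfakes can be sketched in a few lines. The following is a purely illustrative, heavily simplified numpy toy (not a real implementation): one shared encoder maps any face to a common "face code", and one decoder per person turns that code back into that person's face. All names, sizes, and the random linear "networks" here are stand-in assumptions; real systems use deep convolutional networks trained on many images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "face images" as flat vectors (real systems use pixel arrays).
face_a = rng.random(64)
face_b = rng.random(64)

# Random linear stand-ins for the networks (real ones are learned, nonlinear).
encoder = rng.random((16, 64))      # shared: image -> latent "face code"
decoder_a = rng.random((64, 16))    # reconstructs person A's appearance
decoder_b = rng.random((64, 16))    # reconstructs person B's appearance

def swap_face(image, decoder):
    """Encode an image into the shared latent space, then decode it with
    the *other* person's decoder - the core of the face-swap trick."""
    latent = encoder @ image
    return decoder @ latent

# Person A's pose/expression, rendered with person B's face decoder:
swapped = swap_face(face_a, decoder_b)
print(swapped.shape)  # (64,) - same size as the input "image"
```

The key design point is the shared encoder: because both people's images pass through the same latent space, the code captures pose and expression, while each decoder supplies the identity.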
How are deepfakes produced?
Even laypeople can now produce deepfakes with free apps. And the technology keeps advancing: two years ago, an artificial intelligence needed tens of thousands of images of a person to produce good deepfakes; now a few hundred are enough.
Last year, the Chinese app "Zao" caused a stir – it needs just a single portrait photo to cut a person's face into video clips and well-known film scenes. The clips are created in a few seconds and look deceptively real.