
Trust no one: Why a neural network that makes porn is a threat

Dmitry Kurkin

The fake porn video that appeared on the web this week, in which Gal Gadot, the star of "Wonder Woman", allegedly has sex with her stepbrother, turned out to be not just a fake but a fake generated by artificial intelligence.

The author of the video, an anonymous Reddit user with the handle deepfakes, confirmed in an interview with Vice that he used a neural network to graft Gadot's face onto a porn actress's body. The fake is noticeable on the whole, and deepfakes never planned to make money on it: by his own account, he just wanted to see how easy it would be to do with AI. And he has bad news for us: making fake porn at home turned out to be very easy.

The problem, strictly speaking, is not the porn fakes themselves: demand for pictures and videos in which celebrity lookalikes have sex always has existed and always will, among fetishists and lovers of the sensational. Entire careers in the porn industry are built on outward resemblance to the stars, a phenomenon that even produced an outstanding tabloid headline: "Gordon Ramsay dwarf porn double found dead in a badger den".

But where producing such fakes used to require time, resources and a certain amount of skill, a neural network trained to graft faces onto an image simplifies the task enormously. Robots do the work instead of a person, and robots keep getting smarter, which means fakes will look more and more convincing. The user deepfakes clarified that he assembled his AI software from off-the-shelf components found in open-source libraries, and took the photos and videos from public archives and hosting sites such as YouTube. Simply put, all the tools for generating porn fakes were lying right under his nose, and not only his, but under the noses of millions of people around the world.
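To give a sense of how low the entry bar already is, here is a minimal sketch of the crudest cut-and-paste step, using nothing but the open-source OpenCV library. It is not the deepfakes user's actual code, and it assumes two local image files whose names, source.jpg and target.jpg, are placeholders: a real deepfake trains an autoencoder on thousands of frames to blend the swap convincingly, while this toy version only detects a face in each picture and pastes one over the other.

```python
# Toy face-swap sketch: detect a face in each image with OpenCV's bundled
# Haar cascade and paste the source face over the target face.
# An illustration of how accessible the building blocks are,
# NOT the autoencoder-based approach used for real deepfakes.
import cv2

# The frontal-face Haar cascade ships with OpenCV itself
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def first_face(image):
    """Return (x, y, w, h) of the first detected face, or None."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces[0] if len(faces) else None

source = cv2.imread("source.jpg")   # face to copy (placeholder file name)
target = cv2.imread("target.jpg")   # image to paste it into (placeholder)

src_box, dst_box = first_face(source), first_face(target)
if src_box is not None and dst_box is not None:
    sx, sy, sw, sh = src_box
    dx, dy, dw, dh = dst_box
    # Crop the source face, resize it to the target face's box and overwrite it
    face = cv2.resize(source[sy:sy + sh, sx:sx + sw], (dw, dh))
    target[dy:dy + dh, dx:dx + dw] = face
    cv2.imwrite("swapped.jpg", target)
```

Even this crude collage takes only a couple of dozen lines; the workflow deepfakes described differs mainly in replacing the resize step with a neural network trained on photos scraped from exactly the kind of public hosting mentioned above.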

And this, strictly speaking, is not even news. Back in 2003, experts at Oregon Health and Science University warned that audio forgeries are "easy to make and hard to detect". By 2017 the prediction had come true: fake voice recordings generated by neural networks appeared that are very hard to distinguish from the real thing. Video soon caught up with audio. Software developed at the University of Washington synchronizes a person's facial expressions with their speech: in the demonstration clip with Barack Obama, only the audio is real, while the picture is generated by the program. Combine one with the other, and you get ready-made fake videos in which, say, a well-known liberal politician "confesses" his sympathy for Hitler.

It takes very little imagination to picture the abuses such artificial intelligence can lead to; next to them, Microsoft's chatbot, which Twitter trolls turned into a racist in less than a day, will seem like an innocent prank. And of course it is not Gal Gadot, Taylor Swift or Jennifer Lawrence who are primarily threatened by porn-fake technology. They are well-off enough to hire an army of IT specialists who will recognize a fake and a fleet of lawyers who will sue anyone trying to use those fakes for personal gain.

No, it is ordinary mortals who will become the victims of the new and far from harmless algorithms. Digital fakes can be used for pornography, cyberstalking, online harassment, blackmail and the manipulation of people who are easily suggestible and do not particularly follow news from the world of advanced technology ("Your daughter was caught on video, transfer the money immediately or EVERYONE will see it"). And it is not necessarily about porn: fakes can just as well be pushed onto the network for political propaganda and incitement to hatred.

"But this is monstrous!" Yes, monstrous, and it’s not the deepfakes coder that’s to blame. As hackers sometimes break into banking systems not to steal money, but to point the cyber defense department into security gaps, so it was invented by anonymous AI programmer that puts us in front of a fact: the era of digital counterfeits is not an anti-utopian horror story "Black Mirror", and the coming reality. It has not yet been comprehended by modern ethics (can pornophiles be considered as an invasion of the privacy of other people?), Nor by the law in force. But it is in her that we somehow have to live in the coming years. So, it is necessary to develop an antidote against such stuffing.

The curators of the Collins Dictionary named "fake news" the expression of 2017, emphasizing that the average user is hooked on clickbait and reposts sensational headlines without spending time checking whether the information is genuine. Meanwhile, fake news can influence the results of elections and referendums. And this, too, is part of the new reality in which we already live and in which we will have to learn to be less careless.

Social networks have already turned the Internet into a glass-walled house where millions of people can watch your life every day, even when you do not want them to. Now it is obvious that this house has no door either: anyone can try to invade your personal space and use AI to graft your face onto a porn clip or a "Lord of the Rings" movie. Privacy is dead, and it is not only Big Brother, in the form of intelligence services and global corporations, who is to blame for this, but we ourselves.

The world of digital fakes is coming, but that is not a reason for panic, paranoia, giving up the Internet or demands to ban neural networks. It is a reason to think about network hygiene and a new ethics that will establish once and for all that using someone else's image to create a digital fake is vile and illegal. The days when people unconditionally believed everything said on TV (even if it was that "Lenin was a mushroom") are fading away. In the same way, we will gradually teach ourselves not to believe everything written on the Internet.

Cover: Warp Records
