

THE DEEPFAKE THREAT


We are already grappling with the conundrum of Fake News and the humongous volume of fake content churned out by the so-called WhatsApp University; the deepfake threat adds an entirely new dimension to the problem. A few recent deepfake videos have shown what the technology can deliver. "Seeing is believing" may, for the first time in human history, no longer hold true. Deepfakes swap celebrities' faces into porn videos and put words in politicians' mouths, but they can do far worse. Creating deliberate falsehoods and spreading them successfully under the guise of truth is the name of the game. Fact checkers eventually get into the act, but by then it is already too late. The damage has been done.
Deepfakes, to set the record straight, are fake videos or audio recordings that look and sound like the real thing, and would naturally pass off as genuine. Once the technical handiwork of Hollywood special-effects studios and of intelligence agencies creating propaganda, the CIA or GCHQ's JTRIG, the capability today is right there in everyone's hands. Anyone can download deepfake software and create convincing videos in their spare time. It is easy to create a deepfake of an emergency alert warning that an attack is imminent, or to destroy someone's marriage with a fake sex video. Given the no-holds-barred pitched battles of elections, the potential for misuse there is obvious.
Deepfakes are created using the technique of generative adversarial networks (GANs), in which two machine learning models battle it out to perfect the product. "One ML model trains on a data set and then creates video forgeries, while the other attempts to detect the forgeries. The forger keeps faking until the other ML model can't detect the forgery." The quality of the fake videos is directly proportional to the quantity of training data, since machine learning depends on the data on offer for training. This is why videos of former Presidents and Hollywood celebrities featured so frequently in the first generation of deepfakes.
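The adversarial tug-of-war described above can be sketched in miniature. In the toy below (an illustrative assumption, not anything from real deepfake software), the "forger" is a one-parameter generator trying to mimic real data drawn from a normal distribution centred at 3.0, while the "detector" is a logistic-regression discriminator trying to tell real samples from forged ones. Each round, the detector gets better at spotting fakes, and the forger adjusts to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

mu = 0.0          # generator (forger) parameter: fakes are mu + noise
w, b = 0.0, 0.0   # discriminator (detector): D(x) = sigmoid(w*x + b)
lr_d, lr_g, batch = 0.05, 0.1, 64

for step in range(2000):
    real = rng.normal(3.0, 1.0, batch)       # genuine samples
    fake = mu + rng.normal(0.0, 1.0, batch)  # forged samples

    # Detector step: push D(real) toward 1 and D(fake) toward 0
    # (gradient descent on binary cross-entropy loss).
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    grad_w = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_b = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr_d * grad_w
    b -= lr_d * grad_b

    # Forger step: shift mu so forgeries look "real" to the detector
    # (gradient of the non-saturating loss -log D(fake) w.r.t. mu).
    fake = mu + rng.normal(0.0, 1.0, batch)
    d_fake = sigmoid(w * fake + b)
    mu -= lr_g * np.mean(-(1 - d_fake) * w)

print(f"forger's mean after training: {mu:.2f} (real data mean: 3.0)")
```

After training, the forger's output distribution sits near the real one, at which point the detector can no longer tell them apart. Real deepfake GANs apply exactly this dynamic, only with deep convolutional networks over images rather than a single scalar parameter.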
This technology has the capability to completely blur the distinction between the real and the fake, and detecting deepfakes is a hard problem. "GANs that generate deepfakes are getting better all the time, and soon we will have to rely on digital forensics to detect deepfakes — if we can, in fact, detect them at all." Given this hard nut to crack, DARPA is heavily funding researchers to find better ways to authenticate videos. Yet GANs can themselves be trained to evade such forensics. In whose favour the battle will go remains the billion-dollar question today.
DEEPFAKES ARE THE NUKES OF THE FAKE DIGITAL ECOSYSTEM BEING CREATED.
Sanjay Sahay
Have a nice evening.