The war between Russia and Ukraine brought deepfakes into the spotlight when manipulated footage of Ukrainian President Volodymyr Zelensky was released in March.
In one such video, Zelensky appeared to announce that the Ukrainian army was ready to lay down its arms amid the Russian invasion.
However, the trickery failed: viewers did not fall for it, recognising the footage as a deepfake.
Now consider the same threat in the post-Covid world, where remote work is a popular trend and digital technology increasingly joins hands with artificial intelligence (AI).
The FBI's Internet Crime Complaint Center (IC3) has highlighted a similar trend in tech hiring: people using deepfakes to pose as someone else in interviews for remote positions.
And that is not the worst of it.
On June 28, the FBI issued a public announcement stating that it had seen a rise in complaints about deepfakes being used in remote job interviews, along with reports from victims whose Personally Identifiable Information (PII) had been stolen to apply for these jobs.
Deepfakes are videos, images, or audio recordings manipulated to misrepresent someone as doing or saying something they did not.
The reported positions include information technology, database, computer programming, and other software-related jobs.
Some of the advertised roles also include access to customer PII, financial data, corporate IT databases, and proprietary information; a successful impostor would therefore pose a serious threat to the company's sensitive data.
People who use deepfakes during interviews are often betrayed by the fact that the movements captured on camera do not always match the accompanying audio.
The FBI notes that actions such as sneezing and coughing do not line up with what appears on screen during these interviews.
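The mismatch the FBI describes can be framed as a simple signal-processing check: in genuine footage, mouth movement tends to track the loudness of the speech, while badly lip-synced deepfakes break that correlation. The sketch below is a toy illustration of this idea using synthetic signals (the function name and data are my own assumptions, not part of any FBI tooling); real detectors use far more sophisticated audio-visual models.

```python
import numpy as np

def sync_score(mouth_openness, audio_envelope):
    """Pearson correlation between mouth movement and audio loudness.

    A genuine talking-head clip tends to show a strong positive
    correlation; a poorly lip-synced deepfake often does not.
    """
    m = np.asarray(mouth_openness, dtype=float)
    a = np.asarray(audio_envelope, dtype=float)
    m = (m - m.mean()) / m.std()
    a = (a - a.mean()) / a.std()
    return float(np.mean(m * a))

# Synthetic demo: a "genuine" clip where mouth motion follows the audio,
# and a "spoofed" clip where the two signals are unrelated.
rng = np.random.default_rng(0)
audio = np.abs(np.sin(np.linspace(0, 20, 200))) + 0.1 * rng.random(200)
genuine_mouth = audio + 0.1 * rng.random(200)   # tracks the audio
spoofed_mouth = rng.random(200)                 # independent noise

print(f"genuine sync score: {sync_score(genuine_mouth, audio):.2f}")
print(f"spoofed sync score: {sync_score(spoofed_mouth, audio):.2f}")
```

In this toy setup the genuine clip scores close to 1 while the spoofed clip scores near 0, which is the intuition behind flagging out-of-sync coughs and lip movements.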
Can We Consider Deepfakes As An Enemy?
Deepfakes, in the form of fake audio and video content, were ranked as the most significant AI-enabled criminal threat in a study published in Crime Science in 2020.
According to the study, people have a strong propensity to believe what they see and hear for themselves, which lends credence to convincing fake images and sound.
In the future, it could become easy to discredit public figures and commit fraud by impersonating others.
As a result, people may grow wary of all such content, eroding trust in ways that would be detrimental to society.
Digital crimes, in contrast to many traditional crimes, are simple to distribute, copy, and even sell, enabling the marketing of criminal techniques and the provision of crime as a service.
This suggests that criminals could outsource the trickier parts of an AI-based crime, according to the study's first author, Dr Matthew Caldwell.