
Deep Fakes

09 Jan 2021

What are DeepFakes?

Have you seen that video of Jim Carrey as Jack Torrance in “The Shining” or the Queen dancing and issuing a TikTok challenge, or even the one of Steve Buscemi with Jennifer Lawrence’s body? If yes, then you have seen a Deepfake.

What is a Deepfake?

Deepfakes are a form of media in which an existing video or image has been digitally altered using AI and machine learning. Researchers have been working on the technology since the 1990s, though it has improved tremendously over the past few years.

The term was adopted when a Reddit user called "DeepFakes" shared digitally altered adult videos of celebrities in late 2017. He used deep learning to insert the faces of celebrities into those videos, which made them look extremely realistic.

The technology has advanced to the point that voice can also be manipulated to go along with a deepfake video. Baidu, the Chinese tech giant, developed an AI algorithm that can clone a voice from only 3.7 seconds of sample audio, where earlier approaches required around 30 minutes. Fortunately, a clone built from so little data does not sound convincing; producing better-quality output still requires far more samples.

Deepfake algorithms use a machine-learning technique called a deep neural network to examine the facial movements of one person, synthesize images of another person's face making those movements, and swap one face for the other in an image or video. Film studios have done this for years, but it required hundreds of hours of work by video editors and CGI experts to get even half-decent results.
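In rough outline, the classic face-swap setup trains one encoder shared between both people and a separate decoder per person; the swap then amounts to encoding person A's face and decoding it with person B's decoder. The sketch below is a toy, untrained model that only illustrates the data flow and shapes (NumPy matrices stand in for trained network layers; all dimension sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(in_dim, out_dim):
    # Random-initialized weight matrix standing in for a trained layer.
    return rng.normal(0, 0.1, size=(in_dim, out_dim))

FACE_DIM, LATENT_DIM = 64 * 64, 128  # hypothetical flattened 64x64 face

# One shared encoder learns features common to both faces...
encoder = layer(FACE_DIM, LATENT_DIM)
# ...while each person gets their own decoder.
decoder_a = layer(LATENT_DIM, FACE_DIM)
decoder_b = layer(LATENT_DIM, FACE_DIM)

def swap_face(face_a):
    """Encode person A's face, then render it with person B's decoder."""
    latent = face_a @ encoder      # shared latent representation
    return latent @ decoder_b      # decoded as person B

fake = swap_face(rng.normal(size=FACE_DIM))
print(fake.shape)  # (4096,)
```

In a real system each decoder is trained to reconstruct its own person's photos, which forces the shared encoder to capture pose and expression rather than identity; that is what makes the swap carry A's movements onto B's face.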

The technology has improved by leaps and bounds, to the point that anyone can create believable fake videos with a powerful GPU and enough training data. All it requires is an understanding of deep learning and a few hundred sample images of the two people whose faces are to be swapped, which are then fed into the algorithm. No video-editing skills are needed.

The code is freely available online and is simple to use for anyone with a technical background. The caveat is that users still need to collect and prepare the training data themselves.

In one example, a user swapped the faces of two late-night talk show hosts simply by going through a few of their videos and extracting pictures of their faces. It took him less than 72 hours on an ordinary GPU to work his magic.

Perils of Deepfakes

Deepfakes can create havoc in an era where disinformation campaigns are the norm. The technology can be weaponized for revenge porn, political fake news, propaganda (via sock puppets), and tampering with evidence in court proceedings, among other abuses.

Deepfakes are potentially threatening to state institutions, including the military, as fake footage can be fabricated to sow discord among the masses. They can be used to frame political rivals or create misunderstandings between people. As the public slowly becomes familiar with the technology and the tools become readily available, it may not be long before someone produces a genuinely dangerous video.

Financial Fraud

In 2019, the head of an energy firm in the UK was scammed out of GBP 200,000 when con artists used deepfake audio of his boss to request an urgent transfer of funds. Another attempt took place at an unnamed tech firm, where an employee was approached in the same way; that scam failed because the audio sounded slightly robotic, and he flagged it to the legal department. The technology will keep getting better, which means there will be many more cases of financial fraud using deepfake audio.

This is why the financial sector needs far greater awareness: training employees and tightening security measures will reduce the likelihood of such fraud. The public also needs to be educated so that people do not fall for these scams, because as the technology becomes more accessible, it will be easier for con artists to find their marks.

One example is a lawyer in Philadelphia who was nearly scammed by someone impersonating his son, claiming he was in trouble and needed bail money. The scammer sounded just like his son, down to the same choice of words and cadence. The lawyer had a narrow escape: he called his daughter-in-law, who alerted her husband, and only when the son phoned to say it was not him did the lawyer realize he had nearly been conned.

Political Propaganda

Imagine a miscreant creating deepfake audio of a political target making remarks that could get the target into trouble. All it takes is training on as much of the target's real audio as can be gathered, to ensure higher quality, and then spreading the clip via messaging apps and social media platforms. Even if the audio is later proven fake, the damage will already be done, as ordinary people tend to believe what they hear.

An example is the video of Nancy Pelosi, Speaker of the US House of Representatives, in which the footage was slowed down to make her sound drunk. Despite repeated requests, Facebook took no timely action to remove the video. That only helped partisan viewers who chose to believe it was genuine, despite the original footage showing no slurring.

Revenge Porn

Revenge porn is the most dangerous scenario here, as miscreants can use a woman's likeness without her consent to create fake videos and ruin her life. The usual victims are ex-girlfriends or ex-wives. In some cases victims have been blackmailed into paying up under threat of the videos being sent to family and friends; in others, videos were leaked purely to destroy the woman's reputation.

Rana Ayyub, a journalist based in Mumbai, was the victim of a deepfake video spread online after she criticized the Indian government in an article. An average viewer would not examine the details and would assume it was really her, even though it was clearly not her body. It was an extremely traumatizing ordeal, made worse because law-enforcement authorities did not cooperate with her even as members of the ruling party openly shared the video.

According to Deeptrace Labs, 96% of deepfake videos across the internet are of women, mostly celebrities, whose likenesses are used in porn videos without their consent. The firm detected 14,678 deepfake videos across a number of streaming platforms and porn sites, nearly double the 7,964 counted in December 2018.


Sock puppets can also use images from generative adversarial networks (GANs) to look believable while spreading propaganda on social networks. A GAN uses neural networks to create realistic-looking pictures from a training pool of real photos. The technology is not yet perfect: a sharp-eyed observer can usually spot the defects. With further advances, however, there will come a time when the naked eye cannot tell them apart. Even today these images let fake accounts pass as real people, because each picture is unique and the average user will not notice anything wrong. Expect this to get worse as more propaganda campaigns equip their sock puppets with GAN-generated faces to spread misinformation online.
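The adversarial idea behind GAN images can be shown in miniature: a generator turns random noise into a fake "image", and a discriminator scores how real it looks. The sketch below is a toy with random untrained weights, purely to show the two roles and the objective (NumPy stands in for a real framework; all dimensions are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def generator(z, w):
    # Maps a random noise vector to a fake "image" vector.
    return np.tanh(z @ w)

def discriminator(x, v):
    # Sigmoid score: the estimated probability that x is a real image.
    return 1.0 / (1.0 + np.exp(-(x @ v)))

# Hypothetical tiny dimensions: 8-dim noise, 16-"pixel" images.
gen_weights = rng.normal(0, 0.5, size=(8, 16))
disc_weights = rng.normal(0, 0.5, size=16)

noise = rng.normal(size=8)
fake_image = generator(noise, gen_weights)
p_real = discriminator(fake_image, disc_weights)

# During training the two networks play a minimax game: the
# discriminator learns to tell real photos from fakes, while the
# generator learns to fool it (non-saturating loss: push D(fake) -> 1).
generator_loss = -np.log(p_real + 1e-9)
```

Each round of this contest nudges the generator toward images the discriminator can no longer distinguish from real photos, which is exactly why GAN faces keep getting harder to spot.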

Limitations and learnings

Fake videos require a lot of time to produce believable results: the longer the model trains on the data, the more realistic the output. Electricity and GPU costs also add to the total, unless the job runs in the cloud, where you are charged only for GPU time.


There are ways of detecting fakes, but it requires vigilance and using AI to fight back. The US has a DARPA program to detect, and even trace the origin of, deepfakes. Many universities, research teams, and organizations are working on AI-based countermeasures. Microsoft, Facebook, Amazon, and several leading universities are also running the Deepfake Detection Challenge so that open-source detection tools become available.

Experts believe blockchain could help by creating a public ledger of all videos as they are created: cryptography prevents tampering, and any changes are recorded in the ledger.

Cryptographic hashes can also be embedded in videos so that they can be authenticated and any fakes detected.
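The hashing idea is simple to demonstrate with Python's standard library. Below is a minimal, hypothetical "ledger" (the clip ID and byte strings are invented for illustration): a clip's SHA-256 fingerprint is recorded once, and any later copy is re-hashed and compared, so even a one-byte edit is detected.

```python
import hashlib

def fingerprint(video_bytes: bytes) -> str:
    """SHA-256 hash acting as a tamper-evident fingerprint for a clip."""
    return hashlib.sha256(video_bytes).hexdigest()

# A toy append-only ledger: (clip_id, hash) records.
ledger = []
original = b"frame-data-of-the-original-clip"
ledger.append(("clip-001", fingerprint(original)))

def verify(clip_id: str, video_bytes: bytes) -> bool:
    # Re-hash the presented clip and compare with the recorded entry.
    recorded = dict(ledger)[clip_id]
    return recorded == fingerprint(video_bytes)

print(verify("clip-001", original))                # True
print(verify("clip-001", original + b"tampered"))  # False
```

A real deployment would anchor these hashes in a distributed ledger rather than a local list, but the verification step is the same comparison shown here.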

Forensics Lab

A good preventive step is to set up a proper forensics lab with a team of AI researchers and experts who can scrutinize videos to determine their authenticity. Research should be conducted in-house or at least drawn from the public domain, where some work is already being done. Staying ahead of the curve can prove quite useful and can help defuse potential issues.


Personal Precautions

Keep your accounts private so that you share pictures only with trusted people. Even then, limit what you share: it does not take much to create a fake video that an average person cannot tell apart from the real thing. Conduct regular online searches to check whether someone has uploaded images or videos of you.


The technology will keep improving, which will lead to more issues. All we can do is be prepared and fight back using the same tools that created the problem in the first place. Vigilance and preventive measures go a long way.


Shoaib Taimur

Shoaib Taimur is a data scientist/social media researcher based in Karachi.