Deepfake technology is making it harder to tell whether some news you see and hear on the internet is real or not.
What are deepfakes? Deepfake technology is an evolving form of artificial intelligence, often used for online scams, that’s adept at making you believe certain media is real when, in fact, it’s a compilation of doctored images and audio designed to fool you. A surge in what’s known as “fake news” shows how deepfake videos can trick audiences into believing made-up stories.
In this article, you’ll learn what deepfakes are, how they work, the inherent threats, and how to help spot this technology.
What is a deepfake?
The term deepfake melds two words: deep and fake. It combines the concept of deep learning, a form of machine learning, with something that isn’t real.
Deepfakes are artificial images and sounds put together with machine-learning algorithms. A deepfake creator uses deepfake technology to manipulate media and replace a real person’s appearance, voice, or both with similar artificial likenesses or voices.
You can think of deepfake technology as an advanced form of photo-editing software that makes it easy to alter images.
But deepfake technology goes a lot further in how it manipulates visual and audio content. For instance, it can create people who don’t exist, or it can make real people appear to say and do things they never said or did.
As a result, deepfake technology can be used as a tool to spread misinformation.
What are deepfakes used for?
A deepfake seeks to deceive viewers with manipulated, fake content. Its creator wants you to believe something was said or happened that never actually occurred, often to spread misinformation or for other malicious purposes.
What’s the point? The movie industry has used this type of technology for special effects and animation, but deepfake technology is now being used for nefarious purposes, including these:
- Scams and hoaxes.
- Celebrity pornography.
- Election manipulation.
- Social engineering.
- Automated disinformation attacks.
- Identity theft and financial fraud.
Among the possible risks, online deepfake scams can threaten cybersecurity, political elections, individual and corporate finances, reputations, and more. This malicious intent can play out in scams against individuals and companies, including on social media.
Companies are concerned about several scams that rely on deepfake technology, including these:
- Supercharged scams in which deepfake audio is used to make the person on the other end of the line sound like a higher-up, such as a CEO asking an employee to send money.
- Extortion scams.
- Identity theft where deepfake technology is used to commit crimes like financial fraud.
Many of these scams rely on audio deepfakes, which create what are known as “voice skins” or “clones” that let scammers pose as prominent figures. If the voice on the other end of the line sounds like a partner or client asking for money, do your due diligence before acting. It could be a scam.
Social media manipulation
Social media posts supported by convincing manipulations can mislead and inflame the internet-connected public. Deepfakes provide the media that helps fake news appear real.
Deepfakes are used on social media platforms, often to provoke strong reactions. Consider a volatile Twitter profile that weighs in on all things political and makes outrageous comments to stir controversy. Is the profile connected to a real person?
Maybe not. The profile picture on that Twitter account could have been created from scratch and may not belong to a real person. If so, the convincing videos the account shares likely aren’t real either.
Social media platforms like Twitter and Facebook have banned these nefarious uses of deepfakes.
How was deepfake technology created?
The term deepfake originated in 2017, when an anonymous Reddit user called himself “Deepfakes.”1 This Reddit user used Google’s open-source, deep-learning technology to create and post doctored pornographic videos.
The videos were doctored with a technique known as face-swapping. The user “Deepfakes” replaced real faces with celebrity faces.
How are deepfakes made?
Deepfakes can be created in more than one way.
One system is known as a Generative Adversarial Network, or GAN, and is used for face generation. It produces faces that otherwise don’t exist. A GAN pits two separate neural networks (sets of algorithms designed to recognize patterns) against each other, training them on authentic images so they learn the characteristics needed to produce convincing fake ones.
The two networks engage in a complex interplay, interpreting data by labelling, clustering, and classifying. One network generates the images, while the other learns to distinguish fake photos from originals. The resulting algorithm can then train itself on pictures of a real person to generate faked photos of that person, and turn those photos into a convincing video.
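To make that interplay concrete, here is a deliberately tiny, hypothetical sketch of the GAN idea. Everything in it is a toy assumption, not real deepfake code: the “authentic image” is just the number 4.0, the generator is a single learned shift applied to noise, and the discriminator is a two-parameter logistic scorer. The generator’s only goal is to fool the discriminator, and in doing so it drifts toward the real data.

```python
import math

# Toy GAN on one-dimensional "images": the real sample sits at 4.0.
# Generator: g(z) = z + theta, a single learned shift (toy assumption).
# Discriminator: D(x) = sigmoid(w*x + b), a tiny logistic scorer.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

theta, w, b = 0.0, 0.0, 0.0   # generator shift; discriminator weights
REAL = 4.0                    # stand-in for an authentic image
lr_d, lr_g = 0.1, 0.05

for _ in range(400):
    fake = 0.0 + theta        # generator output for noise z = 0

    # Discriminator: several gradient steps pushing D(REAL) -> 1, D(fake) -> 0
    for _ in range(20):
        dr = sigmoid(w * REAL + b)
        df = sigmoid(w * fake + b)
        w += lr_d * ((1 - dr) * REAL - df * fake)
        b += lr_d * ((1 - dr) - df)

    # Generator: one gradient step nudging theta so that D(fake) -> 1
    df = sigmoid(w * (0.0 + theta) + b)
    theta += lr_g * (1 - df) * w

print(f"generator shift: {theta:.2f}")  # should settle near the real value, 4.0
```

Real GANs replace the scalar shift and logistic scorer with deep convolutional networks and train on millions of pixels, but the adversarial loop (discriminator update, then generator update) has this same shape.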
Another system is an artificial intelligence (AI) algorithm known as an encoder. Encoders are used in face-swapping or face-replacement technology. First, you run thousands of face shots of two people through the encoder to find the similarities between the two faces. Then a second AI algorithm, called a decoder, reconstructs the face images and swaps them, so that one person’s face can be superimposed on another person’s body.
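The encoder/decoder swap can likewise be sketched in miniature. In the hypothetical toy below, 8-dimensional vectors stand in for face photos, one shared linear encoder compresses both people’s “faces” into a small code, and each person gets their own linear decoder trained to reconstruct them. The swap is then just a matter of decoding person A’s code with person B’s decoder. All the dimensions, learning rates, and data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for face photos: 8-dim vectors clustered around each person.
mean_a = rng.normal(size=8)
mean_b = rng.normal(size=8)
faces_a = mean_a + 0.1 * rng.normal(size=(200, 8))
faces_b = mean_b + 0.1 * rng.normal(size=(200, 8))

# One shared encoder, plus one decoder per person (all linear for simplicity).
enc = 0.1 * rng.normal(size=(8, 4))    # face -> 4-dim latent code
dec_a = 0.1 * rng.normal(size=(4, 8))  # latent code -> person A's face
dec_b = 0.1 * rng.normal(size=(4, 8))  # latent code -> person B's face

lr = 0.03
for _ in range(3000):
    for faces, dec in ((faces_a, dec_a), (faces_b, dec_b)):
        z = faces @ enc                      # encode with the shared encoder
        err = z @ dec - faces                # reconstruction error
        # gradient-descent step on mean squared reconstruction error
        dec -= lr * z.T @ err / len(faces)
        enc -= lr * faces.T @ (err @ dec.T) / len(faces)

# The swap: encode one of A's faces, then decode it with B's decoder.
swapped = faces_a[0] @ enc @ dec_b
recon_a = faces_a[0] @ enc @ dec_a           # ordinary reconstruction
```

Production tools use deep convolutional encoders and decoders over real images, but the core trick shown here is the same: a shared encoder with per-person decoders, mixed and matched at decode time.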
How many pictures do you need for a deepfake?
Creating a convincing deepfake face-swap video may require thousands of face shots to perform the encoding and decoding noted in the section above.
To produce a likeness that looks real, you also need images that display a wide range of characteristics, such as facial expressions and angles, along with proper lighting. That’s why celebrities and public figures make good subjects for deepfakes: numerous images of them are available on the internet.
Software for creating deepfakes has required large data sets, but new technology may make creating deepfake videos easier. For example, an Indian-origin developer based in San Francisco has used AI to build software in which you merely type in the specifics of the face you require; the engine then generates facial images that look real but are, in fact, nonexistent.2
What software technology is used to create high-quality deepfakes?
You can generate deepfakes in various ways, but computing power is essential. For instance, most deepfakes are created on high-end desktops rather than standard computers.
Newer automated computer-graphics and machine-learning systems enable deepfakes to be made more quickly and cheaply. The software mentioned above is one example of how new methods are increasing speed.
The types of software used to generate deepfakes include open-source Python programs such as Faceswap and DeepFaceLab. Faceswap is free, open-source, multi-platform software that runs on Windows, macOS, and Linux. DeepFaceLab is another open-source program that enables face-swapping.
How can you spot deepfakes?
Is it possible to spot a deepfake video? Poorly made deepfake videos may be easy to identify, but higher-quality deepfakes can be challenging. Continuous advances in technology make detection more difficult.
Specific telltale characteristics can help give away deepfake videos, including these:
- Unnatural eye movement.
- A lack of blinking.
- Unnatural facial expressions.
- Facial morphing — a simple stitch of one image over another.
- Unnatural body shape.
- Synthetic hair.
- Abnormal skin colours.
- Awkward head and body positioning.
- Inconsistent head positions.
- Odd lighting or discolouration.
- Bad lip-syncing.
- Robotic-sounding voices.
- Digital background noise.
- Blurry or misaligned visuals.
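Some of these tells can even be checked programmatically. The sketch below is a hypothetical heuristic, not a real detection product: it assumes a face tracker has already produced a per-frame eye-openness score (an “eye aspect ratio”), and it flags clips whose blink rate is implausibly low, since a lack of blinking is one classic giveaway. The threshold values are illustrative assumptions.

```python
# Hypothetical blink-rate check: real people blink frequently, and early
# deepfakes often blinked rarely or not at all. We assume `ear_values` is a
# per-frame eye-aspect-ratio series produced by some face tracker.

def count_blinks(ear_values, threshold=0.2):
    """Count dips below the threshold; each dip is treated as one blink."""
    blinks, in_blink = 0, False
    for ear in ear_values:
        if ear < threshold and not in_blink:
            blinks += 1
            in_blink = True
        elif ear >= threshold:
            in_blink = False
    return blinks

def looks_suspicious(ear_values, fps=30, min_blinks_per_minute=5):
    """Flag a clip whose blink rate falls below a (made-up) minimum."""
    minutes = len(ear_values) / fps / 60
    return count_blinks(ear_values) < min_blinks_per_minute * minutes

# 60 seconds of footage at 30 fps containing a single brief blink:
frames = [0.3] * 1800
frames[900:905] = [0.1] * 5
print(looks_suspicious(frames))  # True: one blink a minute is suspicious
```

A real detector would combine many such signals (lighting, lip-sync, skin texture) and learn the thresholds from data rather than hard-coding them.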
Researchers are developing technology to help identify deepfakes. For example, a group of four students known as “Team Detectd” from Raisoni College of Engineering in Nagpur has developed AI and neural networks that can detect manipulated audio, images, and videos with over 90% accuracy. As of the report date, the team had identified more than 7,000 deepfake videos with 96% accuracy.3
Organizations are also incentivizing solutions for deepfake detection. One example is the Deepfake Detection Challenge, or DFDC, which major companies launched to spur innovation in deepfake-detection technologies. To speed progress through shared work, the DFDC provides a data set of 124,000 videos that feature eight facial-modification algorithms.4
Deepfakes in politics
In the political arena, one example of a deepfake is a 2020 video of Indian Prime Minister Narendra Modi singing a song along with Rahul Gandhi, a member of Parliament.5 Neither actually sang the song, but it looked and sounded just like them.
On February 7, 2020, India saw the first use of deepfakes in a political campaign.6 A video of Bharatiya Janata Party (BJP) president Manoj Tiwari, in which he criticized his opponents in different languages, went viral on WhatsApp. The video, viewed by more than 15 million people, fomented political controversy around the nation.
Although there are no explicit laws against deepfakes in India, the government may introduce specific punishments for using the technology to attack someone’s character (defamation).7 For instance, a 20-year-old student from Mumbai was arrested for photoshopping a young girl’s face into pornographic content and threatening to share it online.8
Deepfakes can bolster fake news. They are a significant security concern, especially as advancing technology makes the tools easier for more users to wield.
What are shallowfakes?
Shallowfakes are made by freezing frames and slowing down video with simple video-editing software. They don’t rely on algorithms or deep-learning systems.
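To see how little machinery a shallowfake needs, here is a minimal sketch of one of the tricks described above: slowing a clip down by simply duplicating frames. Plain strings stand in for decoded video frames; no learning is involved at any point.

```python
# A "shallowfake" slow-down needs no machine learning at all: repeating
# each frame stretches the clip and halves (or worse) the apparent speed.
# The strings below are stand-ins for decoded video frames.

def slow_down(frames, factor=2):
    """Duplicate each frame `factor` times to slow playback."""
    return [f for frame in frames for f in [frame] * factor]

clip = ["f1", "f2", "f3"]
print(slow_down(clip))  # ['f1', 'f1', 'f2', 'f2', 'f3', 'f3']
```

The same frame-level tampering covers freezing (repeating one frame many times) and crude speed-ups (dropping frames), which is why shallowfakes are so cheap to produce.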
Shallowfakes can also spread misinformation, often by depicting innocent victims in fabricated vulgar content. Indian journalist Rana Ayyub, whose shallowfake porn video was leaked online, is one example.9
What is being done to combat deepfakes?
On a global scale, deepfakes can create problems through mass disinformation. On a personal level, deepfakes can lead to bullying, reputational and financial damage, and identity theft.
India and other countries have deepfakes on their radar, but one big question is whether deepfakes are illegal.
Here are some of the ways the Indian government and companies are trying to detect, combat, and protect against deepfakes with new technology, rules, and legislation:
- Social media rules. Social media platforms like Twitter have policies that outlaw deepfake technology; Twitter’s policy involves tagging any deepfake videos that aren’t removed. Similarly, Facebook bans vulgar content, disinformation, graphic violence, and hate speech.
- Research lab technologies. Research labs use watermarks and blockchain technologies to detect deepfake technology, but technology designed to outsmart deepfake detectors is constantly evolving.
- Filtering programs. Programs like Deeptrace are helping to provide protection. Deeptrace is a combination of antivirus and spam filters that monitor incoming media and quarantine suspicious content.
- Corporate best practices. Companies are preparing themselves with consistent communication structures and distribution plans. The planning includes implementing centralized monitoring and reporting, along with effective detection practices.
- Specific groups. While the government works to combat nefarious uses of deepfake technology through pending bills, individual groups, such as the previously mentioned Team Detectd, are making progress toward making deepfake identification routine as these fakes rapidly spread across the online world.
Currently, India has no legal rules against using deepfake technology itself. However, existing laws can be applied to its misuse, including provisions covering:
- Copyright violation
- Cyber felonies
- Violation of the right to privacy
What can you do about deepfakes?
Here are some ideas to help you protect against deepfakes on a personal level.
If you’re watching a video online, be sure it’s from a reputable source before believing its claims or sharing it with others.
When you get a call from someone — your boss, for instance — make sure the person on the other end is really your boss before taking action.
Don’t believe everything you see and hear on the web. If the media strikes you as unbelievable, it may be.