Deepfake...Real Problem
- Kieren Sharma
- Dec 11, 2023
- 4 min read
In our latest episode, we tackled the pressing topic of deepfakes. This technology, which uses artificial intelligence to create hyper-realistic digital forgeries, is evolving rapidly and raising significant questions about trust and reality. While media coverage of deepfakes may seem to have quietened down, they are actually on the rise. Let’s unpack what that means.

What is a Deepfake?
The term “deepfake” originates from the deep-learning algorithms that power this technology. These algorithms, a form of generative AI, learn from real examples and produce convincing synthetic versions of them. A major concern is that deepfakes can be used to steal online identities and erode trust in digital media, whether text, voices, videos, or images.
The age-old adage “seeing is believing” no longer holds true.
How are Deepfakes Made?
We discussed the two primary AI technologies used to create deepfakes:
Encoder-Decoders: These AI systems are trained on thousands of images or video frames of a person’s face. They learn to compress and reconstruct the face, enabling them to swap one face onto another in videos. This technique is widely used for face swapping.
Generative Adversarial Networks (GANs): GANs involve two AI systems in competition with one another—a generator (forger) and a discriminator (detective). The generator creates fake images, while the discriminator tries to identify whether they are real or fake. Through this iterative process, both systems improve, producing increasingly convincing forgeries. This method is now the leading approach for deepfakes.
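The forger-versus-detective loop can be sketched in miniature. The toy below (an illustrative sketch only, not how production deepfake models are built) trains a one-parameter “generator” to fool a logistic “discriminator” into accepting its samples as draws from a target distribution, which stands in for real images:

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # "Real" data: a simple target distribution standing in for real images.
    return rng.normal(4.0, 0.5, size=(n, 1))

g_w, g_b = 1.0, 0.0   # generator (forger): fake = g_w * z + g_b
d_w, d_b = 0.1, 0.0   # discriminator (detective): P(real) = sigmoid(d_w * x + d_b)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.05
for step in range(3000):
    z = rng.normal(size=(32, 1))
    fake = g_w * z + g_b

    # Discriminator update: push scores on real data toward 1, on fakes toward 0.
    for x, label in ((real_batch(32), 1.0), (fake, 0.0)):
        grad = sigmoid(d_w * x + d_b) - label   # cross-entropy gradient w.r.t. the logit
        d_w -= lr * float(np.mean(grad * x))
        d_b -= lr * float(np.mean(grad))

    # Generator update: push the discriminator's score on fakes toward 1,
    # i.e. learn to produce samples the detective mistakes for real ones.
    grad = (sigmoid(d_w * fake + d_b) - 1.0) * d_w
    g_w -= lr * float(np.mean(grad * z))
    g_b -= lr * float(np.mean(grad))

fakes = g_w * rng.normal(size=(1000, 1)) + g_b
print(f"fake sample mean: {fakes.mean():.2f}  (target: 4.0)")
```

As training alternates, the generator’s output distribution drifts toward the real one; the same adversarial pressure, scaled up to deep networks and image data, is what makes GAN forgeries so convincing.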
The Rise of Deepfakes: A Brief History
Deepfakes first surfaced in 2017 on Reddit, gaining attention through videos of celebrities and politicians saying or doing things they never actually did. For example, there was a deepfake of Barack Obama making inappropriate comments about Donald Trump, and one of Mark Zuckerberg claiming to control the world.
Tools like DeepNude, which generated fake nude images from photos of clothed people, further highlighted the dangers. More recently, a deepfake of the Pope in a puffer jacket fooled many people. This shows not only how advanced the technology has become, but also how adept people are at using it to craft convincing narratives.

The Bad and the Ugly
Deepfakes have been exploited for malicious purposes, including:
Scams: Deepfake voices have been used to deceive individuals into transferring large sums of money, with one case resulting in a $35 million loss.
Misinformation: Deepfake images, such as those of Donald Trump being arrested, have blurred the line between reality and fiction. This erosion of trust risks leaving people desensitised even when real events occur.
Legal Manipulation: In child custody cases, deepfakes have been used to fabricate recordings of parents behaving abusively.
Pornography: A staggering 98% of deepfake videos are pornographic, with 99% of them targeting women, particularly celebrities. Shockingly, it now takes less than 25 minutes and less than £1 to create a 60-second deepfake pornographic video.
The Good Side of Deepfakes
While the risks are undeniable, deepfake technology also has positive applications:
Voice Restoration: People with conditions like ALS can use AI to recover their voices by training systems on past audio clips.
Awareness Campaigns: Deepfakes have been used to highlight important causes, such as malaria prevention, with David Beckham speaking in 27 different languages.
Challenging Bias: Deepfakes have been used to combat gender bias, such as mapping male faces onto female football players to showcase the excitement of women’s football.
Entertainment: Deepfakes allow actors’ voices to be translated into multiple languages, making films and series more accessible globally.
Detecting Deepfakes
As deepfake technology advances, so do detection methods:
AI Detection Tools: Companies are developing AI tools to identify deepfakes. Intel’s real-time detector, for example, claims 96% accuracy, though its performance in real-world settings may be lower. Other techniques analyse the subtle signals of blood flow that show up in a video’s pixel data.
Watermarking: Invisible digital watermarks can be embedded into AI-generated images, readable only by detection software.
Human Observation: Signs like blurriness, inconsistent lighting in the eyes, or unusual hand details (e.g., missing fingers) can indicate deepfakes.
Source Verification: Always check the source of an image or video. Reverse image searches on platforms like Google can help identify original content.
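To make the invisible-watermark idea concrete, here is a deliberately simple least-significant-bit scheme (a toy sketch; real provenance watermarks are far more sophisticated and are designed to survive compression, cropping, and re-encoding):

```python
import numpy as np

def embed_watermark(pixels, bits):
    """Hide a bit string in the least significant bit of the first pixels."""
    out = pixels.copy()
    flat = out.ravel()
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit   # overwrite the LSB with a watermark bit
    return out

def extract_watermark(pixels, n_bits):
    """Read the hidden bits back out of the pixel LSBs."""
    return [int(v & 1) for v in pixels.ravel()[:n_bits]]

# Toy 4x4 greyscale "image" and an 8-bit watermark.
img = np.arange(16, dtype=np.uint8).reshape(4, 4) * 10
mark = [1, 0, 1, 1, 0, 0, 1, 0]

stamped = embed_watermark(img, mark)
assert extract_watermark(stamped, 8) == mark

# Each pixel changes by at most 1 out of 255 brightness levels,
# so the watermark is invisible to the eye but trivially machine-readable.
print(int(np.max(np.abs(stamped.astype(int) - img.astype(int)))))  # prints 1
```

The design trade-off is the same one production systems face: the mark must be imperceptible to humans yet reliably detectable by software, which is why detection tooling, not eyeballing, is the dependable route.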
Legal and Social Implications
Governments and social media platforms are taking steps to regulate deepfakes:
Regulations: China mandates clear labelling of deepfake content, and the EU is working on similar policies.
Criminalisation: Sharing intimate deepfake images is now a crime in the UK and some US states, like Virginia.
Social Media Guidelines: Platforms such as YouTube, Instagram, and Reddit prohibit posting deepfakes without proper labelling.
Hollywood Strikes: Recent strikes highlighted actors’ concerns over deepfakes being used to replicate their likenesses without consent.
The Future of Deepfakes
The technology is advancing rapidly, with key trends including:
Text-to-Video: AI is developing the ability to generate full videos from text prompts. Although current outputs are short, this is expected to improve.
Synthetic Content: Some experts predict that as much as 90% of new online content could be synthetically generated by 2026.
What Can We Do?
To address the rise of deepfakes, we need to act now:
Educate Yourself: Learn how the technology works and how to spot deepfakes.
Critical Consumption: Assess the content you encounter online and verify sources before sharing.
Be Alert: Recognise the types of content that might convince you most, as these are often targeted.
Support Policy: Advocate for local governments to introduce legislation addressing deepfakes.
This is an ever-evolving landscape, and we’ll continue to cover these developments in future episodes.
If you enjoyed reading, don’t forget to subscribe to our newsletter for more, share it with a friend or family member, and let us know your thoughts—whether it’s feedback, future topics, or guest ideas, we’d love to hear from you!