DEV Community

Ojas Kale

Posted on • Originally published at thebalanced.news

The Deepfake Threat to Indian Democracy: How Synthetic Media Is Rewriting Trust

Introduction

In a democracy of over 900 million voters, trust is the invisible infrastructure that keeps institutions functioning. In India, that trust is increasingly being tested by deepfakes. These are AI generated or AI altered audio, video, or images that convincingly imitate real people. What once required Hollywood budgets can now be done with consumer hardware and open source software. The implications for journalism, elections, and public discourse are profound.

Deepfakes are not just a technology problem. They are a media literacy problem, a governance challenge, and a democratic stress test. This article examines how deepfakes work, why India is uniquely vulnerable, what recent incidents reveal, how the law is responding, and what citizens, journalists, and platforms can realistically do next.

What exactly are deepfakes

Deepfakes use machine learning models, most commonly generative adversarial networks (GANs) or diffusion models, to create synthetic media that mimics a real person’s appearance or voice. In a GAN, one neural network generates fake content while a second network tries to detect it; over time, the generator improves until its output is hard to distinguish from reality. Diffusion models take a different route, learning to turn random noise into realistic images or audio step by step, and now often exceed GANs in quality.
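The adversarial idea can be illustrated with a deliberately toy numerical sketch. This is not a real GAN (no neural networks, just two coupled scalar parameters standing in for generator and discriminator), but it shows the core dynamic: each side updates against the other until the fake distribution matches the real one.

```python
import numpy as np

# Toy stand-in for adversarial training: the "real" data is N(4, 1).
# The generator produces N(g, 1); the discriminator is a 1-D threshold d.
rng = np.random.default_rng(0)
real_mean, g, d, lr = 4.0, 0.0, 0.0, 0.1

for _ in range(300):
    real = rng.normal(real_mean, 1.0, 64)
    fake = rng.normal(g, 1.0, 64)
    # Discriminator: move its threshold between the real and fake sample means
    d += lr * ((real.mean() + fake.mean()) / 2.0 - d)
    # Generator: shift its output toward the discriminator's decision boundary
    g += lr * (d - g)

print(round(g, 1))  # the generator's parameter drifts toward the real mean
```

In a real GAN the "parameters" are millions of network weights and the updates come from gradients of a classification loss, but the feedback loop is the same: the detector's improvements are exactly what trains the faker.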

The technology is no longer niche. Open source tools such as DeepFaceLab and commercial tools such as ElevenLabs or Synthesia have lowered the barrier to entry. The MIT Technology Review explains how modern diffusion models have further improved realism, especially in audio and facial expressions.

Source: https://www.technologyreview.com/2023/04/19/1071649/diffusion-models-explained/

Why deepfakes matter specifically for Indian democracy

India’s democratic ecosystem has several characteristics that make it especially vulnerable to synthetic media manipulation.

Scale and speed

India is the world’s second-largest internet market after China, with over 850 million internet users as of 2023 according to the Internet and Mobile Association of India.

Source: https://www.iamai.in/knowledge-centre/reports/india-internet-2023

False content can reach millions within minutes through WhatsApp, YouTube, Instagram, and regional language platforms.

Linguistic diversity

India has 22 scheduled languages and hundreds of dialects. Deepfake audio tools can now clone voices in multiple languages, making misinformation more locally credible. A study by Microsoft Research highlights how multilingual AI lowers the cost of targeted persuasion.

Source: https://www.microsoft.com/en-us/research/publication/multilingual-speech-synthesis/

High trust in audiovisual media

Multiple studies in media psychology show that people tend to trust video more than text. The Reuters Institute Digital News Report 2023 notes that Indian audiences rely heavily on video platforms for news.

Source: https://www.digitalnewsreport.org/survey/2023/india-2023/

In such an environment, a convincing fake video of a political leader or election official can do real damage before fact checkers catch up.

Real incidents that illustrate the risk

Deepfakes in India are no longer hypothetical. They are already shaping public debate.

The Rashmika Mandanna deepfake

In November 2023, a manipulated video appearing to show actor Rashmika Mandanna circulated widely on social media. The video used face swapping technology to paste her face onto another person’s body. The incident sparked national outrage and led to government warnings on deepfakes.

Source: https://www.bbc.com/news/world-asia-india-67358277

While not political, the case demonstrated how quickly synthetic media can go viral and how slow platform responses can be.

AI generated political messages during elections

During the 2023 Telangana assembly elections, several parties openly used AI generated videos and audio of leaders delivering messages in regional dialects. While some were labeled, many circulated without clear disclosures.

The Indian Express reported on how these tools were used to scale campaigning.

Source: https://indianexpress.com/article/political-pulse/ai-in-telangana-elections-9036245/

The ethical issue is not the use of AI itself but the thin line between legitimate synthesis and deceptive impersonation.

Fake videos of public officials

In 2024, the Press Trust of India debunked multiple fake videos purportedly showing election officials making partisan statements. Some used crude lip sync techniques, others used AI voice cloning.

Source: https://www.ptinews.com/story/national/fact-check-election-deepfake/1412345

Each incident erodes confidence in both officials and authentic media.

The scale of the deepfake problem globally

While India specific numbers are limited, global data provides context.

A 2023 report by Sensity AI found that deepfake incidents increased significantly year over year, with political deepfakes becoming one of the fastest growing categories.

Source: https://sensity.ai/reports/2023-deepfake-report/

The World Economic Forum has listed AI driven misinformation as a top global risk in its Global Risks Report 2024.

Source: https://www.weforum.org/reports/global-risks-report-2024/

India, with its massive electorate and digital reach, sits at the center of this risk landscape.

How deepfakes undermine democratic processes

Voter manipulation

A well-timed fake video released days before polling can influence undecided voters. Even if it is debunked later, the effect of the initial exposure often persists, a phenomenon known as the continued influence effect that is well documented in cognitive science.

Source: https://www.apa.org/monitor/nov01/continued

Delegitimizing real evidence

When fake content becomes common, real evidence can be dismissed as fake. Scholars call this the liar’s dividend. The Brookings Institution explains how this weakens accountability.

Source: https://www.brookings.edu/articles/the-liars-dividend/

Chilling effect on journalism

Journalists face a growing verification burden. Newsrooms must now authenticate not just documents but pixels and waveforms, and smaller regional outlets often lack the resources to do so.
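One low-cost aid for this kind of verification is perceptual hashing: comparing a suspect frame against a known-authentic still, where near-duplicates hash alike but different images do not. A minimal average-hash in NumPy (a rough illustration with synthetic "frames", not a production forensic tool):

```python
import numpy as np

def average_hash(img, size=8):
    """Downsample a grayscale image to size x size blocks, threshold at the mean."""
    h, w = img.shape
    bh, bw = h // size, w // size
    img = img[: bh * size, : bw * size]  # crop so blocks divide evenly
    blocks = img.reshape(size, bh, size, bw).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()  # 64-bit fingerprint

def hamming(a, b):
    """Number of differing bits between two hashes; small means near-duplicate."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(1)
frame = np.linspace(0, 255, 64 * 64).reshape(64, 64)  # stand-in authentic frame
near_copy = frame + rng.normal(0, 2, frame.shape)     # mild recompression noise
unrelated = rng.uniform(0, 255, frame.shape)          # a different image entirely

print(hamming(average_hash(frame), average_hash(near_copy)))  # small distance
print(hamming(average_hash(frame), average_hash(unrelated)))  # large distance
```

Real verification workflows use far more robust tools (reverse image search, metadata analysis, forensic services), but the principle of matching content fingerprints rather than trusting appearances is the same.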

This is where media literacy platforms like The Balanced News play a role by helping audiences understand verification processes rather than blindly trusting virality.

The Indian legal and regulatory response

India’s regulatory framework is still catching up.

IT Act and IT Rules

The Information Technology Act, 2000 criminalizes identity theft and cheating by personation under Sections 66C and 66D.

Source: https://www.meity.gov.in/content/information-technology-act-2000

The IT Rules 2021 require platforms to take down unlawful content and respond to government orders.

Source: https://www.meity.gov.in/content/intermediary-guidelines-and-digital-media-ethics-code-rules-2021

However, these laws were not drafted with generative AI in mind.

Government advisories on deepfakes

In November 2023, the Ministry of Electronics and Information Technology issued an advisory directing platforms to act swiftly against deepfake content and comply with existing laws.

Source: https://www.meity.gov.in/writereaddata/files/Advisory%20Deepfakes.pdf

In March 2024, the Election Commission of India issued an advisory to political parties urging responsible use of AI and warning against deceptive deepfakes during campaigns.

Source: https://eci.gov.in/press-releases/eci-advisory-on-ai-use/

Data protection law

The Digital Personal Data Protection Act, 2023 introduces consent requirements for processing personal data, which could apply to unauthorized use of a person’s likeness.

Source: https://www.meity.gov.in/content/digital-personal-data-protection-act-2023

Legal experts note that enforcement and clarity remain challenges.

Platform responsibility and its limits

Major platforms claim to be investing in detection and labeling.

YouTube requires disclosure of altered or synthetic content related to real people.

Source: https://support.google.com/youtube/answer/13964833

Meta has announced labeling policies for AI generated political ads.

Source: https://about.fb.com/news/2024/02/ai-generated-content-labels/

Despite these steps, enforcement is inconsistent, especially in regional languages. Automated detection tools still struggle with high quality fakes.

Can deepfakes be detected reliably

Researchers are developing detection tools that analyze artifacts such as inconsistent eye blinking or audio frequency anomalies. However, detection often lags generation.
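One audio artifact of this kind is an unnaturally sharp high-frequency cutoff, which some synthesis and compression pipelines leave behind. A toy check using NumPy's FFT (purely illustrative, with white noise standing in for speech; real cloned audio requires far more careful analysis):

```python
import numpy as np

def high_band_energy_fraction(signal, sr, cutoff_hz=6000):
    """Fraction of the signal's spectral energy above cutoff_hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return spectrum[freqs >= cutoff_hz].sum() / spectrum.sum()

sr = 16000  # 16 kHz sample rate, one second of audio
rng = np.random.default_rng(0)

# "Natural" stand-in: broadband noise with energy across the full band
natural = rng.normal(0, 1, sr)

# "Synthetic" stand-in: the same noise, low-pass filtered in the frequency domain
spec = np.fft.rfft(natural)
freqs = np.fft.rfftfreq(sr, d=1.0 / sr)
spec[freqs >= 6000] = 0.0
bandlimited = np.fft.irfft(spec, n=sr)

print(high_band_energy_fraction(natural, sr))      # roughly 0.25 for white noise
print(high_band_energy_fraction(bandlimited, sr))  # near zero: suspicious cutoff
```

A hard spectral wall like this is only a weak heuristic, and modern generators increasingly avoid it, which is precisely why detection remains an arms race.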

A survey in ACM Computing Surveys explains why deepfake detection is an arms race.

Source: https://dl.acm.org/doi/10.1145/3446372

For developers, basic detection experiments can be done using open source libraries, though these are not production ready.

# Example: simple face consistency check across two video frames using DeepFace
# (third-party library: pip install deepface)
from deepface import DeepFace

# Compare the faces in two extracted frames; a mismatch can flag a possible swap
result = DeepFace.verify(img1_path="frame1.jpg", img2_path="frame2.jpg")
print(result["verified"])  # True if both frames appear to show the same person

Such tools can assist journalists but cannot replace editorial judgment.

The role of journalism and news literacy

Journalism alone cannot solve the deepfake problem. Audiences must understand how modern misinformation works.

News literacy involves knowing how to question sources, check context, and understand incentives. Platforms such as The Balanced News focus on explaining news processes, bias, and verification so readers can form independent judgments.

Instead of telling people what to think, media literacy initiatives explain how information is produced and manipulated. This approach is crucial in a deepfake era.

What citizens can realistically do

Citizens are not powerless. Practical steps include:

  • Pausing before sharing sensational audio or video
  • Checking whether credible news organizations have reported the same claim
  • Looking for official clarifications from institutions
  • Understanding that absence of coverage can be a signal

Guides published by UNESCO on misinformation literacy offer useful frameworks.

Source: https://www.unesco.org/en/articles/media-and-information-literacy

Policy recommendations for India

Based on current evidence, several steps can strengthen democratic resilience.

  1. Clear legal definitions of synthetic impersonation
  2. Mandatory labeling standards for political synthetic media
  3. Investment in public interest detection research
  4. Support for independent fact checking and media literacy platforms
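Labeling standards only work if labels are verifiable. As a sketch of the idea, the hypothetical snippet below binds a content hash to an AI-disclosure flag and signs the result with Python's standard-library hmac. Real provenance schemes such as C2PA use public-key certificates rather than a shared key; the key, field names, and example bytes here are all invented for illustration.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; real systems use certificate-based keys

def label_media(media_bytes, creator, ai_generated):
    """Attach a provenance label binding the content hash to its disclosure."""
    manifest = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "ai_generated": ai_generated,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_label(media_bytes, manifest):
    """Check that the media matches its hash and the label was not altered."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest()
    )

video = b"fake bytes standing in for a campaign video"
label = label_media(video, creator="Party X media cell", ai_generated=True)
print(verify_label(video, label))               # True: label matches the content
print(verify_label(video + b"edit", label))     # False: content was changed
```

The design point is that the disclosure travels with the content and any tampering, with either the media or the label, breaks verification. That is what makes a labeling mandate auditable rather than an honor system.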

Think tanks such as ORF have argued for a balanced approach that protects free speech while addressing harm.

Source: https://www.orfonline.org/expert-speak/deepfakes-and-indian-law/

Conclusion

Deepfakes challenge a basic assumption of democratic life that seeing is believing. In India, where digital media is deeply intertwined with politics, the stakes are especially high.

Technology will continue to evolve. Laws will adapt slowly. The most durable defense lies in an informed public that understands both the power and the limits of media.

Strengthening journalism, improving platform accountability, and expanding news literacy efforts, including initiatives like The Balanced News, are not optional extras. They are democratic necessities in the age of synthetic reality.

Originally published on The Balanced News