Deepfake videos - audio or video footage digitally altered using artificial intelligence (AI) to make a person appear to say or do something they never did - have increased sharply in India in recent years. From phony political statements and celebrity videos to explicit content involving ordinary people, deepfakes have become a major danger to privacy, dignity, democracy, and national security.

What makes deepfakes so dangerous is their realism. In contrast to conventional fake videos, deepfakes are frequently indistinguishable from authentic footage, and thanks to readily available online AI tools, even a non-technical person can produce a convincing fake video in a matter of minutes.

This raises a critical legal question:

“Is Indian law prepared to deal with the growing menace of deepfake videos?”

This blog examines deepfakes, their effects, the current Indian legal system, court rulings, difficulties with enforcement, and the necessity of legal reform.

WHAT ARE DEEPFAKE VIDEOS?

The term “deepfake” combines “deep learning,” a subfield of artificial intelligence, and “fake,” meaning made-up or fabricated content. In a deepfake video, a person’s face, voice, or actions are realistically superimposed or replaced with another person’s likeness using machine learning algorithms. Types of deepfakes include: face-swap videos, voice cloning, lip-sync manipulation, synthetic full-body videos, and explicit or pornographic deepfakes.
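
For readers curious about the mechanics, the sketch below illustrates, in PyTorch, the shared-encoder / dual-decoder idea behind many face-swap deepfakes: one encoder learns a common representation of faces, while a separate decoder is trained for each identity, so decoding person A’s frame with person B’s decoder produces the swap. This is a minimal, purely illustrative sketch under simplified assumptions (a tiny network, random placeholder images, a single training step); real systems add face detection, alignment, adversarial losses, and careful blending.

# Minimal, illustrative face-swap autoencoder sketch (PyTorch).
# Assumptions: 64x64 aligned face crops, random tensors as placeholder data.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # shared latent representation
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32x32 -> 64x64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

# One shared encoder, one decoder per identity: decoder_a is trained to
# reconstruct faces of person A, decoder_b to reconstruct faces of person B.
encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()
loss_fn = nn.L1Loss()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-4,
)

faces_a = torch.rand(8, 3, 64, 64)  # placeholder batch of person A's face crops
faces_b = torch.rand(8, 3, 64, 64)  # placeholder batch of person B's face crops

for _ in range(1):  # real training runs many epochs over large face datasets
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The "swap": encode a frame of person A, but decode it with person B's decoder,
# yielding A's pose and expression rendered with B's facial appearance.
with torch.no_grad():
    swapped_frame = decoder_b(encoder(faces_a))

The ease of this recipe, packaged into consumer apps, is precisely why convincing fakes can now be produced by non-experts in minutes.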

WHY ARE DEEPFAKES A SERIOUS LEGAL CONCERN?

Deepfakes are not merely a technological problem; they raise legal and constitutional issues as well. The principal dangers of deepfakes are:

a) Privacy Violations – Deepfakes intrude into a person’s personal life, image, and identity.

b) Damage to Reputation – Fake videos can destroy careers, reputations, and mental well-being within hours.

c) Political Manipulation – Deepfake videos and speeches can influence elections, spread false information, and incite violence.

d) Gendered Harm – Women are frequent targets of deepfakes, especially non-consensual explicit content.

e) Threat to National Security – Fake footage of judges, military personnel, or political figures can incite unrest or fear.

DEEPFAKE INCIDENTS IN INDIA

India has already seen a number of concerning deepfake incidents:

→ Deepfake videos featuring influencers and actors endorsing phony products.

→ Politically motivated fake videos circulated during elections.

→ Explicit deepfake videos of women circulated online for extortion and harassment.

These instances highlight how urgently strong enforcement and legal clarity are needed.

Real-Life Instances:

1- Deepfake video of Rashmika Mandanna (2023) -

Rashmika Mandanna, a well-known actress, was at the centre of one of the most talked-about deepfake incidents in India. In a doctored video that went viral on social media, a woman in revealing clothes enters an elevator; Rashmika Mandanna’s face had been digitally superimposed using AI tools, even though the body belonged to someone else. The video spread widely within hours. Impact on law and society: the actress strongly denounced the abuse of the technology, the incident sparked a national conversation about women’s online safety, and the Delhi Police registered a case under the relevant provisions of the IT Act and the IPC. This example demonstrates how a woman’s identity can be weaponized by deepfakes without her knowledge or consent.

2- Deepfake videos of Prime Minister Narendra Modi (2023-24) -

Prime Minister Narendra Modi appeared in a number of deepfake videos that went viral online, including clips showing him speaking in regional languages he had never used in public and appearing to endorse schemes or claims he never made. Some videos were made for humour or harmless translation, but others were deceptive and politically sensitive. Legal issues: the danger of misinformation, the threat to electoral integrity, and the possibility of public unrest. These incidents demonstrated the threat deepfakes pose to public confidence and democratic processes.

3- Deepfake audio scam using a CEO’s voice (India, 2023) -

In a significant financial fraud incident, scammers used AI-generated voice cloning to pose as a top company executive. An employee was duped into transferring a sizeable sum of money after receiving what appeared to be a genuine phone call from the CEO. Legal concerns: impersonation and cheating, cybercrime, and the absence of specific provisions on AI-enabled fraud. This case demonstrated that audio deepfakes can be just as harmful as video deepfakes.

4- Deepfake political campaign videos during elections -

According to reports, deepfake technology was employed during India’s most recent elections to translate political speeches into several languages, create fictitious endorsements, and distribute deceptive campaign content. Although some political parties described these as “AI-assisted outreach tools,” experts cautioned that unlabelled synthetic content could mislead voters. The legal grey area: there is no requirement to disclose AI-generated political content, election regulations are not strictly enforced against deepfakes, and such content may violate the Model Code of Conduct.

5- Deepfake pornography targeting Indian women -

Numerous cases have surfaced in which ordinary women, including professionals and students, were targeted with AI-generated explicit images, morphed pornographic videos, extortion, and blackmail. The consequences include severe psychological trauma, social stigma, and limited immediate remedies. Most cases were registered under Sections 67 and 67A of the IT Act and Sections 354 and 509 of the IPC. This illustrates how deepfake abuse in India is gendered.

6- Deepfake news anchors and fake bulletins -

AI-generated videos that mimic news anchors have occasionally been used to spread false news and fake bulletins on social media and WhatsApp. These videos appeared professionally produced, carried the logos of reputable channels, and misled a sizeable section of the public. This raised grave concerns about media credibility and misinformation.

IS THERE A SPECIFIC LAW ON DEEPFAKES IN INDIA?

No. India does not yet have a specific law that defines and criminalises “deepfakes.” Instead, a patchwork of existing legal provisions, drawn primarily from cyber law, criminal law, data protection legislation, constitutional rights, and intermediary rules, indirectly regulates deepfakes.

Constitutional law:

Right to Privacy (Article 21) -

In Justice K.S. Puttaswamy vs Union of India (2017), the Supreme Court held that privacy is a fundamental right under Article 21 of the Constitution. Deepfakes, particularly those that mimic a person’s voice, appearance, or image, can infringe upon informational privacy, image privacy, bodily integrity, and personal liberty. Although the Constitution does not specifically address deepfakes, privacy jurisprudence offers a solid foundation for legal protection, and victims may invoke Article 21 against the misuse of their digital identities.

Freedom of Speech (Article 19) with Reasonable Restrictions -

Although the right to freedom of expression is guaranteed, it can be reasonably restricted on grounds such as defamation, public order, decency, and morality. These grounds permit the restriction of deepfakes that damage reputations or incite hatred.

Indian Penal Code (IPC), 1860:

Although the IPC predates digital technology, several of its sections indirectly address the harms caused by deepfakes:

Section 499 – Defamation -

Defamation liability may attach to deepfakes that harm a person’s reputation by portraying them as saying or doing something they never did. The creator or publisher of such a video may face criminal charges under Section 499 (defamation) and Section 500 (punishment for defamation).

Section 469 – Forgery for Harming Reputation -

This section criminalises forgery intended to damage a person’s reputation. A deepfake video that misattributes words or actions may be treated as digital “forgery” under the IPC.

Sections 354A, 354D and 509 – Sexual Harassment / Stalking / Insult to Modesty -

Deepfakes that target women and contain sexually explicit content fall under the following categories:

Section 354A – Sexual harassment

Section 354D – Stalking

Section 509 – Word or gesture intended to insult the modesty of a woman

Non-consensual representation is recognized as a crime under these provisions.

Information Technology Act, 2000 (IT Act):

The IT Act is India’s principal cyber law and the closest existing framework for addressing the harm that deepfakes cause online.

Section 66C – Identity theft -

When a person is electronically impersonated through a deepfake, it may amount to identity theft under:

Section 66C – Punishment for identity theft

This section punishes anyone who fraudulently or dishonestly makes use of another person’s electronic signature, password, or any other unique identification feature.

It is useful when a deepfake impersonates a real person in order to commit fraud.

Section 66D – Cheating by personation -

This provision makes cheating by personation through a computer resource an offence:

It punishes anyone who cheats by personation using any computer resource or communication device. The section applies where a deepfake is used to deceive or defraud someone by posing as another person.

Section 66E – Privacy Violation -

Section 66E makes it illegal to capture, publish, or transmit private images of a person without consent. This is particularly relevant to non-consensual deepfake pornography.

Sections 67, 67A & 67B – Obscene Material

These sections address the publication and circulation of obscene and sexually explicit material on the internet:

Section 67 – Publishing obscene content

Section 67A – Publishing sexually explicit acts

Section 67B – Child pornography

Creators and distributors of deepfakes containing sexually explicit material may be held liable under these provisions.

Sections 72 & 72A – Breach of Confidentiality & Privacy

Section 72 penalises breach of confidentiality and privacy by any person who has secured access to electronic records or information while exercising powers under the Act.

Section 72A (inserted by the 2008 amendment) penalises the disclosure of personal information, obtained while providing services under a lawful contract, without the consent of the person concerned.

This is important in cases where stolen personal information is used to create deepfakes.

Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021

The IT Rules, 2021 require intermediaries (social media platforms and hosting services) to act against harmful content.

Under the Rules, intermediaries must:

→ Remove unlawful content within 36 hours of receiving notice,

→ Provide grievance redressal mechanisms,

→ Enable identification of the first originator of unlawful content (in the case of significant messaging intermediaries), and

→ Observe due diligence to retain safe harbour protection under Section 79 of the IT Act.

If platforms fail to act against deepfake content, they may lose safe harbour protection and can be held accountable for hosting it. However, because the Rules do not define deepfakes precisely, platforms are left to interpret what counts as “unlawful content,” which weakens consistent enforcement.

CRIMINAL PROCEDURE AND ENFORCEMENT

Even though there is no standalone “deepfake law,” law enforcement typically relies on the Code of Criminal Procedure (CrPC) to register FIRs, investigate, and prosecute, and on cybercrime cells and forensic labs to trace creators and servers. In practice, a large number of deepfake cases have been registered under the relevant sections of the IT Act and the IPC.

CONCLUSION

Deepfake technology represents one of the most harmful intersections of artificial intelligence and misinformation. Indian law offers some limited remedies under existing statutes, but it is not fully prepared to handle the scale and complexity of deepfake threats. The law must advance at the same pace as the technology. Without a clear and comprehensive legal framework, deepfakes will continue to jeopardize human dignity, democracy, and privacy. India stands at a critical juncture where it must choose between proactive regulation and reactive enforcement.

Stay tuned for more blogs on technology law, cybercrime, and contemporary legal developments.
