Law

Fed Up With Deepfakes, Journalist Rajat Sharma Seeks Legal Action

Journalist Rajat Sharma is waging a war against deepfakes. His PIL before the Delhi HC seeks government regulation of the same.

By - Ritika Jain | 12 Jun 2024 11:28 AM IST

Journalist Rajat Sharma has had it with the many fake ads on Facebook and Instagram that use AI voice clones impersonating him to peddle weight-loss drugs and purported cures for type 2 diabetes.

The IndiaTV News chairman and editor-in-chief filed a Public Interest Litigation (PIL) in the Delhi High Court in May seeking to stop the proliferation of deepfakes on social media in India.

The PIL relies on a BOOM fact-check which found viral videos of news anchors were fake and were overlaid with voice clones promoting a diabetes medicine.

“Seeing videos depicting me endorsing medicines for diabetes and weight loss, which I never did, has shaken me to the core,” Rajat Sharma told BOOM. In his plea, Sharma said that he was “shocked” by the blatant misuse of his likeness, voice, and face.

“This encounter has highlighted how vulnerable individuals are to such attacks and has emphasised the urgent need to combat the misuse of deepfake technology,” Sharma said on his decision to file the plea.

One such deepfake video using an AI voice clone of Sharma can be seen below.

[Embedded video: deepfake ad using an AI voice clone of Rajat Sharma]

In his plea, Sharma said the proliferation of deepfake technology poses significant threats to various aspects of society, including but not limited to: misinformation and disinformation campaigns that undermine the integrity of public discourse and democratic processes; potential use in fraud, identity theft, and blackmail; harm to individuals' reputations, privacy, and security; erosion of trust in media and public institutions; and violation of intellectual property rights and privacy rights.

Since 2023, fake ads using real videos of popular Hindi news anchors such as Ravish Kumar, Anjana Om Kashyap and Arnab Goswami have mushroomed over social media platforms such as Facebook and Instagram, to peddle spurious drugs which often promise cures for weight loss, diabetes and arthritis.

Increasingly, scammers rely on AI-generated voice clones to falsely claim that these news readers, who are household names in India, have endorsed these drugs and have spoken about their efficacy.

Besides Sharma, anchor Sudhir Chaudhary is a particularly common target in this genre of fake AI-generated voice clone ads.

BOOM reached out to some of the above-mentioned anchors for a comment but did not hear back till the time of publishing.

In May this year, AI-generated voice clones of Uttar Pradesh Chief Minister Yogi Adityanath and Bharatiya Janata Party MP Hema Malini joined the long list of doctored celebrity videos promoting a fake diabetes cure.

It’s not just scam ads that news anchors need to worry about. Deepfake technology and specifically AI-voice clones have been misused to peddle disinformation in India.

Just days before Delhi voted in Phase VI of the 2024 Lok Sabha elections, AI voice clones of Sudhir Chaudhary and News 24 anchor Manak Gupta went viral, purporting to show an Aam Aadmi Party (AAP) candidate ahead in opinion polls.

The voice clones were overlaid on existing videos of the two anchors along with fake graphics to bolster the false claim.

Also Read: Meta Platforms Littered With Political Deepfakes Under The Garb Of Satire

“No Solution To Deepfakes, Can't Ban Technology”

Efforts to combat these AI voice clone-enabled scam ads through human-written fact-checks are now falling short as the ease of generating deepfakes has turned mitigation efforts into a game of whack-a-mole.

Indian Institute of Technology (IIT)-Bombay professor Anurag Mehra says there is no solution to the deepfake problem. “Earlier, creating deepfake videos was possible only on big computers, but now this can be done using apps or a browser to access online services,” Mehra told BOOM.

“It is difficult to fight this menace because of two aspects. A) Catching up with newer technologies is always reactive - you need to understand how it is being done and then figure out ways to detect the fakes; B) the level of sophistication in creating these deepfakes is so high that it becomes harder and harder to detect,” Mehra said.

Mehra said this is more from a “forensic point of view, for law enforcement to identify deepfakes.” “For ordinary users the situation is much worse because they don’t have the tools to do the detection. They see something and it seems so real that they take it to be real. Someone who is aware of the issue may ask fact-checkers like BOOM or some other agency. The recognition that a video may be fake or manipulated cannot really be done by non-technical people but by skilled experts or specialised companies,” he said.

“You can't really ban the technology...so there's no escape from this,” Mehra added.

Sharma’s point is similar to Mehra’s. “This technology erodes trust in media, making it difficult for people to discern genuine content from manipulated media, thereby undermining the credibility of information sources,” the Aap Ki Adalat anchor said.

BOOM reached out to Meta to understand the platform’s strategy to combat deepfakes.

However, a spokesperson directed us to a statement that did not address the specific questions we had asked.

"Need action first, examine later policy" on deepfakes 

Currently, India does not have an official AI policy. In 2020, the Centre released a draft of the National Strategy for Artificial Intelligence to deal with this issue. Though discussions are ongoing, experts point out that speed is of the essence.

Advocate Rohan Swarup told BOOM that currently the response time to viral fake news or social media posts is too long.

“The timeline for response needs to be sharper,” Swarup said. He said no one wants a deepfake video to stay online; such videos serve no competing interests and are made solely for malicious purposes. “In such a scenario, we need to have an action first, examine later policy,” Swarup added.

Swarup is part of Rajat Sharma’s team that is putting up a legal fight against deepfakes and AI-generated content.

Sharma experienced this first-hand: the petition was filed partly because he found there was no dedicated mechanism to address deepfakes when he unsuccessfully attempted to take down the video in which he is allegedly shown selling diabetes medicines. “This personal experience has shed light on the dangerous capacity of deepfakes to manipulate public opinion, distort narratives, and erode democratic processes," he said.

India is also seeing increasing instances of deepfake pornography and AI-based non-consensual imagery, in line with the rest of the world. Rajat Sharma said another prevalent risk involves AI facilitating financial fraud through cyberattacks, phishing, and automated scams, resulting in substantial financial losses. “It is imperative to address these issues through the establishment of robust ethical guidelines, regulatory frameworks, and heightened public awareness to ensure a balanced approach that mitigates AI's risks without overlooking its societal impacts,” he said.

Decode has written about the various scams using AI-generated content.

Last month on May 28, the Delhi High Court said we live in a world of deepfakes while dismissing a husband’s submission of photos as proof of his wife’s alleged infidelity.

Before that, in April, the high court took note of the rising instances of deepfakes and observed that the technology has the “potential” to create “irreparable harm” while hearing senior advocate Gaurav Bhatia’s plea to take down ‘deepfake’ videos that allegedly showed him being assaulted by lawyers at a Noida court in Uttar Pradesh.

In December 2023, when advocate Chaitanya Rohilla filed the first PIL seeking regulation of deepfakes, the high court acknowledged the dangers and prodded the government to act on the issue.

“This technology is now available in the borderless world. How do you control the net? I can't police it that much. After all, the freedom of the net will be lost. So, there are very important balancing factors involved in this. You have to arrive at a solution that balances all the interests,” the high court had said. “It is only the government, with all its resources, that can do it. They have the data, they have the wide machinery, and they have to make a decision about it. It is a very, very complicated issue,” it added.

“Deepfakes have emerged as a new threat to democracy… weakens trust in the society and its institutions,” Ashwini Vaishnaw, then Union Minister for Electronics and Information Technology, had said in November 2023.

Announcing the government’s regulation against AI-generated malicious content, Vaishnaw said the regulation could also include financial penalties on the creator as well as the platform hosting such content.

Courts can bring in guidelines till a law is introduced

Senior advocate Sandeep Sethi said blocking orders are the first line of defence. “Dynamic injunctions can give immediate relief. But it is a stop-gap situation. In my opinion, one needs to tackle the problem at its source,” Sethi said, referring to the apps that enable the creation of deepfake videos. “We need to shut down such websites. They are all the more dangerous for their ease of access and low cost. The authorities must look into this and take action under the IT Act,” Sethi added.

“Deepfake videos are malicious and not only harm the public at large, but it is also harmful for the personality being impersonated,” Sethi said.

“In today’s day and age when everything is online, deepfakes are dangerous for the virality it can achieve. These videos are shared in large numbers and have a wide reach before anyone can take action to bring it down. By the time the videos are taken down, it has already spread to millions,” he added.

“In Rajat Sharma’s case, deepfake videos not only damage a personality like him, but also victimise his audience, who may land up believing the deepfake videos and the message they propagate,” Sethi said.

“For now blocking order is the only option. The regulator has to be far more vigilant,” the senior advocate added.

Also Read: Deepfakes Underwhelm As Political Parties Rely On Cheap AI Voice Clones
