India is grappling with a deepfake problem. So what is a deepfake?
Deepfakes are essentially morphed videos (or photos and audio) made with artificial intelligence tools and readily available material. As the technology evolves, even people with little technical skill can now easily create deepfakes.
Social media platform X is full of deepfake videos, as Decode found. But is this a crime?
It can be harmless if used for entertainment – like how AI helped Val Kilmer get his voice back in Top Gun: Maverick, or how Sly Stallone was reimagined as The Terminator. However, it becomes a crime when deepfakes are used for pornography, identity theft, virtual forgery, to fuel misinformation, attack data or privacy, or for cyberterrorism. Unlawful acts where a computer, mobile, or any other electronic device is used as a tool, is a target, or both, are considered cybercrimes. Deepfake abuse is a cybercrime.
Those convicted of a crime involving deepfakes can face a jail term of at least three years.
Are deepfakes limited to video? Not at all. It is quite easy to clone voices – even Prime Minister Narendra Modi’s. Decode tried some tools – free and paid versions – and though the results were easily discernible as fake, it was able to clone PM Modi’s voice in seconds.
Research suggests that deepfake videos, audio, and photos are typically used to deliberately spread misinformation or destroy someone’s reputation; they are designed to harass, intimidate, or spread fear. Another report suggests that 96 percent of deepfake videos are pornographic in nature, most of them targeting female celebrities worldwide.
“Tech is always evolving. With AI going global, tools allowing for deepfakes are becoming easily accessible. Take OpenAI’s tool DALL-E, for example, which makes images out of text,” cybersecurity expert Srinivas Kodali said. “The concern is that, at this point in society, even normal people who are not very technologically inclined can use these AI tools, and the volume of deepfakes will increase drastically. Now there will be hundreds and thousands of videos coming out,” Kodali said.
Faced with morphed videos that are not easily identifiable, media law professor Olav Albuquerque’s oft-repeated, more-than-a-decade-old phrase finds resonance – “Believe nothing you hear and half of what you see”.
Centre rings alarm bells, issues advisory to social media platforms
A deepfake video of actor Rashmika Mandanna, and the discovery of similar pornographic videos misusing the faces of other Indian actresses, sparked outrage and forced the Indian government to issue directives to social media platforms to regulate such content.
Days after the video went viral, Delhi Police filed an FIR and reportedly questioned a 19-year-old from Bihar in connection with the case.
Addressing the dangers of artificial intelligence, Union Minister of Electronics and Information Technology Ashwini Vaishnaw said, “a new crisis is emerging due to deepfakes” and that “there is a very big section of society which does not have a parallel verification system” to tackle this issue.
“The Rashmika Mandanna deepfake was a wake-up call for Indians. The high-profile incident has definitely caused some concern and even prompted PM Modi to address it,” Kodali told BOOM.
“Deepfakes are, in 95 percent of cases, obscene or vulgar content with a grave potential to damage a person’s reputation irreversibly, as most people watching may not even realise it is fake,” law professor Kumar Askand Pandey said.
Even as PM Modi called deepfakes a “new age sankat [problem]”, the latest installment in the ‘Mission: Impossible’ franchise – where the antagonist is a sentient, rogue AI called ‘The Entity’ – reportedly inspired US President Joe Biden to sign an executive order to rein in the misuse of artificial intelligence.
Does the law address the issue of deepfakes?
The current legislation is not adequate to address issues arising from deepfakes. Speaking at a Decode event on November 20, ex-cop Muktesh Chander said it is easier to investigate a murder case than a scam involving monetary loss, because the latter has many links across multiple levels.
“There is no specific law dealing with deepfakes. The general law that requires intermediaries to take down offensive or fake information from their platforms applies to deepfakes as well,” said Pandey, a professor at Dr. Ram Manohar Lohiya National Law University. “Prosecution and conviction of offenders is also a far cry, as it is a boundary-less crime, and by the time a deepfake is reported the damage may already be done,” he added.
“Deepfakes are a relatively recent phenomenon, and when the Information Technology (IT) Act was introduced in 2000, there was no way one could have visualised that something like this would emerge in times to come,” Chander said. “That’s why, since there is no specific provision in either the Indian Penal Code (IPC) or the IT Act, we have to use provisions existing in law as of today to combat cybercrimes,” he told BOOM.
Explaining further, Chander, who retired as Goa’s DGP, said victims of a deepfake could make a case under several provisions of the IPC, the IT Act, and in some cases even the Copyright Act. “The most important section in law is Section 66D of the IT Act, which deals with cheating by impersonation through an electronic device. This is a cognizable offence and it is non-bailable,” Chander said.
“Other sections of the IPC, such as Section 420 (cheating), Sections 153A and 153B (spreading hate on communal lines), and Section 499 (defamation), can also be invoked depending on the facts of the case,” the ex-cop said. “If any content is uploaded online, then that can also be dealt with under Sections 67, 67A, and 67B of the IT Act. The Copyright Act can also be used in case a copyrighted image or video has been used to create fake audio or video,” Chander added.
All the above-mentioned provisions attract a jail term of at least three years, a fine, or both.
However, law professor Dr Nagarathana A said the “major problem” lies in the procedural laws, which even now provide rules made to suit the investigation and trial of conventional crimes. “Similarly, there are problems in evidence law. Unless these are altered, such crimes cannot be effectively regulated,” said the professor from Bengaluru’s National Law School of India University (NLSIU).
Law playing catch-up with technology
Ex-cop Muktesh Chander said the law will always play catch-up with technology. “Cyber technology is advancing at a pace where it is impossible for any law to catch up with it,” he said. “Any law will become obsolete in a year or two. Deepfake incidents have amply demonstrated that. To keep up, we must constantly bring in amendments to the IT Act, the Digital Personal Data Protection Act, and associated rules from time to time,” Chander added.
Dr Nagarathana said the “major challenge” is legally understanding the way the technology functions. “Unless the investigating and the prosecuting officers understand these technological technicalities and are able to relate them to the correct legal provisions (if any), they will not be able to effectively investigate as well as prosecute these kinds of offences,” she told BOOM.
Nagarathana, who is also the co-director of NLSIU’s Advanced Centre on Research, Development, and Training in Cyber Law and Forensics, said it is “important that the trial court is also able to understand these aspects while appreciating the evidence.”
“The problem also lies in the law to an extent since not all crimes committed with the use of AI are well covered under the existing provisions,” she added.
To solve the deepfake problem, laws must address the harm, not the means
Professor Nagarathana said a law must be designed to address the harms rather than the modes. “For example, ‘creating any fake content to cause disrepute, damage, injury, etc to any person’ can be an offence irrespective of the technology used to create such fake content,” she said, adding that it is “important that we redesign some of our laws in this manner so as to keep it ready to tackle forthcoming forms of crimes.”
“As long as the object of law is to curb harm, harms committed in novel ways or with novel technology can be read into those laws. This also requires wider but meaningful interpretation of laws by the judiciary,” Nagarathana said.
Advocate Mrinal Shankar said social media platforms are still trying to develop technology that automatically detects deepfake videos. “No one has successfully managed it yet,” the advocate told BOOM. “The social media platforms’ status as ‘intermediaries’ protects them in a limited way – they just host content and are not content creators,” Shankar said.
Kodali said every major entity and big tech company has clamped down on known AI tools and limited their availability. “Codes shared on GitHub have all been noted,” he said, adding that a certain level of industry regulation has already taken place in the West, “but we haven’t seen it in India yet.”
“India, though, is also trying to clamp down,” he added.
Controlling deepfakes during election season a “delicate balance”
India is new to the deepfake problem; the West has been dealing with it for a while. Advocate Mrinal Shankar said social media patrolling of deepfakes is a “delicate balance”.
Until now, pornographic videos were the mainstay of deepfakes; however, these videos have now infiltrated poll campaigns.
“If a deepfake is pornographic in nature, once identified, the social media platforms take it down without any official (police or judicial) order,” Shankar told BOOM. However, it gets tricky when it comes to poll campaigns, he said.
“As an intermediary, I must be presented with definite proof that a certain video is a deepfake. The evidence must unequivocally demonstrate its ‘deepfakeness’. Because many times, in a political atmosphere, if a politician is caught in a compromising position, they may claim to be victims of a deepfake,” Shankar said.
“Even then, a social media intermediary may choose to act,” he said, adding that otherwise it becomes a slugfest where all sides blame each other, and social media platforms are caught in the crossfire, asked to decide on their own and take action.
There are several other instances where deepfakes have caused political turmoil.
Deeptrace’s 2019 report recaps instances where deepfakes were used for political ends: a video of US House Speaker Nancy Pelosi doctored to make her appear to slur her words, the Malaysian sex scandal allegedly featuring the country’s Minister of Economic Affairs Azmin Ali and a rival minister’s male aide, and the political unrest and attempted coup in Gabon, among others.
The same report pegged India as the sixth-most vulnerable country to deepfakes.
More recently, Argentina became a testing ground for AI as presidential candidates used deepfakes to portray each other in a negative light. While Javier Milei was portrayed as a cuddly lion, his rival Sergio Massa was depicted as a Chinese communist leader.
In April 2023, the Tamil Nadu government was hit with a deepfake scandal when its minister Palanivel Thiaga Rajan (PTR) claimed that two leaked audio clips, in which he is allegedly heard speaking against CM MK Stalin’s family, were fakes. PTR was forced to resign as finance minister and, in the subsequent cabinet reshuffle, was given the IT portfolio.
According to a VICE report, the BJP IT Cell’s 2020 partnership with The Ideaz Factory, a political communications firm, to create “positive campaigns” using deepfakes marked the technology’s debut in Indian elections.
With election season underway, several doctored videos have surfaced online – many of them drawn from the popular game show Kaun Banega Crorepati (KBC). Political parties across the aisle have been sharing morphed videos.
But who will regulate this? Probably the Election Commission, since it is in charge during election time.
However, Kodali says deepfakes are more of a social problem than an election problem. This will be the first [general] election since AI tools became accessible to a billion people, he said.
It remains to be seen how people will use generative AI and how it will impact society, the independent researcher said, adding, “Can’t say for sure that it will create problems, but it will definitely create challenges, and how society reacts to it remains to be seen.”
“So not super scared, yet,” Kodali said.