In January 2025, BOOM published 89 fact-checks in English, Hindi and Bangla.
We published 21 fact-checks addressing AI-generated claims, the highest number of AI-related fact-checks in a single month to date, up from our previous record of 12.
These fact-checks covered topics such as the Maha Kumbh, the wildfires in Los Angeles, the Delhi Assembly elections, and more.
The majority of misinformation we found was around the ongoing ‘Maha Kumbh Mela’ held in Prayagraj, Uttar Pradesh. Other major topics included Islamophobic narratives, the Delhi Assembly elections and the fires in Los Angeles.
The Muslim community in India was the main target of misinformation, comprising 14.6% of all false claims. This was followed by claims directed at Prime Minister Narendra Modi and Delhi Chief Minister Arvind Kejriwal.
Of the 21 AI-related fact-checks, five concerned the wildfires in Los Angeles and four the Maha Kumbh, with the rest covering the Delhi Assembly elections and other topics.
41.5% of the total fact-checks involved claims shared by verified accounts on X.
In 37.1% of the total fact-checks, the misinformation was peddled using old and unrelated videos. These mainly included Islamophobic claims targeting the Muslim community, as well as false or fake claims directed at social media content creators and celebrities.
Theme Assessment
Maha Kumbh 2025
We published 22 fact-checks related to the Kumbh Mela, of which 31.8% (7 fact-checks) involved AI-generated images of celebrities used to spread misinformation, falsely showing them attending the event. For instance, fake images of X owner and Tesla CEO Elon Musk, along with professional wrestlers John Cena, Brock Lesnar, and Roman Reigns, went viral claiming they were at the festival.
Taking a closer look at the images, we noticed the watermark of Grok, the AI chatbot built into X, at the bottom of all of them. We then ran the pictures through the AI detection site Hive Moderation, which confirmed they were AI-generated.
Similarly, fake images of Indian movie stars such as Shah Rukh Khan, Salman Khan, Akshay Kumar, Allu Arjun, Sonakshi Sinha, Tamannaah Bhatia and Prakash Raj taking a dip at the Kumbh also went viral, presented as real.
Another fake image, of a letter, went viral on social media, falsely claiming that godman Dheerendra Shastri had predicted the stampede at the Kumbh Mela, which killed 30 people and injured several others.
We ran the image through Hive Moderation and another AI detection tool, SightEngine, both of which indicated a 95% likelihood that the image was AI-generated.
Delhi Assembly Elections 2025
BOOM published 12 fact-checks on mis/disinformation around Delhi's Assembly elections.
Arvind Kejriwal was the most targeted politician (5 fact-checks), with false claims made against him. Misinformation was also directed at PM Modi (3 fact-checks), the Aam Aadmi Party (2 fact-checks) and Rahul Gandhi (1 fact-check).
On the other hand, the BJP and its party members spread the most misinformation on social media. BOOM published 5 fact-checks debunking their claims, which mostly targeted AAP or Kejriwal.
For example, a video of Kejriwal was posted on BJP Delhi’s Facebook page, where he is seen saying that he has started to understand politics a little, and that cleaning the Yamuna river won't get him votes. The BJP's post claimed, “Kejriwal has now started understanding a little politics and now he has also understood that cleaning Yamuna ji, the symbol of faith and devotion of Hindus, will not get AAP votes. He is an extremely cunning man.”
However, BOOM found that the video had been cropped. In the original video, Kejriwal is talking about his efforts to clean the Yamuna river, even though it won't earn him votes.
In one instance, AAP itself posted an AI-generated clip on Instagram and X, showing a luxurious palace-like residence and claiming it was the newly proposed residence of PM Modi as part of the Central Vista project.
The video was shared by AAP with a Hindi caption, which translates to English as, "Big Breaking... The video of the royal palace has come before the public for the first time. Is this why the doors of the royal palace are not opened for the public?"
BOOM ran the video past our partners at the Deepfakes Analysis Unit, a public awareness platform combating misinformation, who provided further evidence confirming that it was entirely AI-generated.
Misinformation around Los Angeles Wildfires
A series of wildfires in Southern California in early January sparked misinformation online, including AI-generated videos of firefighters rescuing animals that were presented as real footage.
Additionally, an old and unrelated video was falsely shared, claiming that Muslims had started calling for prayer to extinguish the fires in Los Angeles.
Social media was also flooded with misinformation, including a video that falsely showed American pop star Taylor Swift calling the ongoing wildfires a "divine retribution" against the United States for supporting Israel's bombings in Gaza.
However, several deepfake detection tools revealed a high likelihood that voice cloning and lip-syncing algorithms had been used to manipulate the video.
Medium, Intent & Type of Deception
51.7% of the 89 fact-checks were shared via videos, followed by images (39.3%), text (7.9%) and audio (1.1%).
Regarding the intent behind spreading mis/disinformation, 93.3% of the total fact-checks were under the “Sensationalist” category. This was followed by “Smear Campaigns” against Indian political leaders (5.6%) and the intent of spreading “Demographic Anxiety” (1.1%).
52.8% of the total fact-checks consisted of false content, followed by fabricated (23.6%) and misleading content (12.4%).