
Business As Usual For Facebook, As Hateful Content Thrives In India: Study

The company chose to hold back the human rights impact assessment it commissioned to review the risks to human rights in India linked to its platforms.

By - Archis Chowdhury | 18 July 2022 3:20 PM IST

A recent study by a Europe-based human rights organisation found hate-related content thriving on Meta-owned Facebook, on fan pages of controversial figures embroiled in hate speech cases in India. It found significant spikes in interactions on these pages, peaking around inflammatory content featuring calls for violence and genocidal statements by these individuals.

The damning study, by Foundation The London Story (FoundationTLS), was published shortly after Meta released a summary disclosure of a human rights impact assessment (HRIA) commissioned nearly three years ago to assess potential risks to human rights in India related to its flagship social media platform. The disclosure, deemed inadequate by academics and advocacy groups, has done little to assuage fears of growing human rights violations in India involving Facebook.

In the disclosure, the company claims that it "has taken steps to expand its human rights team" and has "significantly increased its India-related content moderation workforce and language support", but it provides no indication of the nature of the risks to human rights, nor any details of the recommendations made in the HRIA.

However, the study, led by Ritumbra Manuvie, Lecturer of Law at the University of Groningen and Executive Director of FoundationTLS, observed, "Despite Meta's claim that it curbs hate actors' ability to use the platform, networks in Facebook India seem to grow rapidly, and their hate content is publicly available."

Last year, Facebook changed its name to Meta to denote the parent company of its various platforms, including Facebook, Instagram and WhatsApp. In this article, the names Meta and Facebook have sometimes been used interchangeably to refer to the parent company.

Also Read | WSJ Exposé On Facebook & BJP Triggers Political Row In India

Hate Fest In Fan Pages

The study, titled "Preachers of Hate: Documenting Hate Speech On Facebook India", monitored over 600 pages from January 1, 2019, to December 31, 2021. The study highlights how "extensive fan page networks are using Facebook to widely amplify hate speech and calls to violence and genocide against Indian Muslims."

For the purpose of the study, the hateful content flagged was classified under three themes: mobilisation of Hindus against Muslims, xenophobic content targeting Muslims, and calls for the elimination of minorities, especially Muslims.

The study focused on pages supporting Hindutva leader Yati Narsinghanand, Sudarshan TV editor-in-chief Suresh Chavhanke, and right-wing influencer Pushpendra Kulshrestha. "We selected these actors due to ongoing controversies involving hate speech," it noted.

Speaking to BOOM, Manuvie added, "We wanted to focus on Suresh Chavhanke and Yati Narsinghanand due to the recent events in India which saw both these actors actively organising hate rallies and engaging in direct or indirect hate speech. Interestingly, we found Pushpendra Kulshreshtha fan pages to be a mouthpiece not only of Mr Kulshreshtha but also of the other two actors, which was the reason to include them in this particular report."

"We would also like to look at the Islamist groups, but our language limitation defeats us there - although there are studies which actively look at Salafi Jihadism and jihadist narratives. For now, our team's language capacity is only limited to Hindi, English, Punjabi, and Malayali among Indian Languages," she added.

On December 19, 2021, Chavhanke made an inflammatory speech at an event organised by the Hindu Yuva Vahini in Delhi, administering an oath to a large gathering while clearly inciting violence. Videos soon went viral on social media, in which he could be heard leading the crowd to say, "in order to make this country a Hindu nation and to keep it a Hindu nation, and to move forward, we will fight, die and kill, if required."

Just a few days earlier, at a Dharam Sansad in Haridwar, Narsinghanand and other religious and political leaders, including some linked to the BJP, had engaged in various forms of hate speech, spreading conspiracy theories about an Islamic takeover of India and inciting violence against Muslims.

Manuvie's study found that such events coincided with significant spikes in interactions on the fan pages being monitored. In each instance, inflammatory and hate-inciting videos were at the heart of these interactions.


The study noted, "Content from the Dharam Sansad was also shared through Facebook (screenshot above), including content advertising and inviting people to join upcoming Dharam Sansads."

During the course of the study, a number of inflammatory posts were flagged by Manuvie and her team. These videos, often featuring dehumanising tropes and interactions between the controversial figures whose fan pages were studied, were deemed to flout Facebook's own community guidelines on hate speech and dangerous content.

The study provided screenshots of the flagged posts and stated that they remained online at the time the report was published.

Examples of posts flagged by Foundation The London Story during their study. These posts, they noted, were still online.

The study also used CrowdTangle, a Meta-owned social media monitoring tool, to track interactions across the pages, and found significant growth in followership over the study period: Narsinghanand fan pages grew by 871.01%, and Kulshreshtha's own verified page by 499.5%.

While Facebook claims to "protect user safety" in its summary disclosure, the study by Manuvie shows the exact opposite: dangerous and hate-related content was shown to thrive on the platform with barely any moderation or action from Facebook.

Burying The Lede

In 2019, following increasing criticism over the use of Facebook to spread hateful and discriminatory speech, the company commissioned law firm Foley Hoag LLP to conduct an independent assessment of the human rights risks related to the platform.

The assessment involved "interviews with 40 civil society stakeholders, academics, and journalists", along with a review of Facebook's content policy and a survey of "over 2000 individual rights holders".

Manuvie was one of the people interviewed in the process, along with Apar Gupta, the Executive Director of India-based advocacy group Internet Freedom Foundation (IFF). Earlier this year, IFF and FoundationTLS were both part of a group of organisations that signed a public letter urging Meta to release the HRIA unredacted and in its entirety.

The unredacted report was never released; Meta opted to publish the 'summary disclosure' instead. For participants in the HRIA, including Gupta and Manuvie, the crux of the assessment was swept entirely under the carpet.

"Facebook's release of HRIA was inadequate," Manuvie told BOOM. "As a law firm specialising in BHR (business and human rights) they (Foley Hoag) acted professionally and with independence. However, Meta did not release their report, and did not share any part of the report provided by the law firm with the stakeholders. Meta took a thin excuse of Principle 21(c) of UNGP BHR in suggesting that the release of the report in its entirety will harm the stakeholders. As one of the stakeholders I think this risk assessment should be left to us, Meta can share the complete HRIA internally with stakeholders and openly ask us if we would like to be quoted or not."

She added, "I believe that this report not only hides the extent of human rights violations Meta's products and relationships in India are causing, it also takes away the sanctity of conducting HRIAs and reduces it to a checklist exercise. This is unacceptable to civil societies."

IFF too provided a detailed statement on the summary disclosure. It noted that past HRIA reports commissioned by Meta, on countries like Myanmar, Sri Lanka, Indonesia, the Philippines and Cambodia, provided "detailed and extensive analysis on the human rights impacts as well as recommendations to Meta". These reports ran from eight to 62 pages (as in the case of Myanmar) in length, and provided details on the risks posed by the abuse of the platform.

In comparison, IFF observed that the India-specific summary disclosure, only four pages in length, simply covered "apparent" insights and risks, and failed to provide any details on the nature of the risks or the recommendations made.

Gupta, in a public statement, said that parts of the summary disclosure did not reflect the inputs provided by him.

He was referring to Meta's statement that its "platforms had provided an invaluable space for civil society to organize and gain momentum, provide users with essential information and facts on voting, and also enabled important public health updates". "For any honest accounting to commence at the very least the full contents of the human rights impact assessment requires urgent disclosure," he added.

Is Clamping Down On Hate Bad For Business?

In August 2020, the Wall Street Journal reported that Facebook had overlooked infringements of its own hate speech rules by members of the ruling BJP, to avoid a backlash from the government.

The article stated that Facebook India's Public Policy head Ankhi Das advised the platform against taking action on hate speech by BJP members, to avoid "damaging the company's business prospects in the country". It specifically mentioned videos of BJP leader T. Raja Singh that had been flagged for hate speech and marked as dangerous by Facebook employees. The videos showed Singh saying that Rohingya Muslims should be shot, that Muslims are traitors, and that mosques should be demolished.

According to the WSJ, Facebook employees suggested permanently banning him from the platform. However, Das opposed applying the hate speech rules to Singh's videos, allowing him to remain on the platform.

Such instances of favouritism towards India's ruling party and its supporting Hindutva nationalist ecosystem did not end there.

A year later, Facebook employee-turned-whistleblower Sophie Zhang made further revelations. While working for Facebook, she had detected sophisticated networks of fake accounts, run by real humans, used to drive up fake engagement on the platform. India was one of many countries where she detected such activity.

What Zhang had effectively identified was the presence of 'IT Cells', and her investigation revealed that such fake engagement was drowning out the real voices of Indian people.

Also Read | FB Whistleblower Says LS Speaker Doesn't Want To Hear Testimony On Desi IT Cells

She identified five different IT Cell networks in India: two belonging to the Congress, one to the Aam Aadmi Party, and two to the BJP. After she flagged the issue to Facebook, all of these networks were eventually taken down, with the exception of one: a network of fake accounts linked to BJP parliamentarian Vinod Sonkar.

Over the past few years, the Indian government has taken measures to enforce compliance by intermediaries (social media platforms) with its requests to moderate content on their platforms. These attempts culminated in the introduction of the Intermediary Rules 2021, also called the new IT Rules.

While microblogging platform Twitter has sued the Indian government over orders to take down content critical of the government, Meta was one of the companies that agreed to comply.

However, amid increasing worries over the state of human rights in the country, how far the company is willing to go in complying with the government remains to be seen.

BOOM has written to Meta seeking comment on the allegations of inaction on dangerous and hate-related content in India, as well as the exclusion of details on risks and recommendations from the HRIA summary. The article will be updated if and when we receive a response.

Disclaimer: BOOM is part of Facebook's third-party fact-checking programme.
