Explainers

YouTube Algorithm Recommends Videos That Violate Its Own Policies: Study

According to Mozilla, the data collected reveals how YouTube's recommendation algorithm pushes harmful, inappropriate and misleading content online.

By - Archis Chowdhury | 8 July 2021 9:14 PM IST

A recent study by the non-profit Mozilla Foundation found that video hosting giant YouTube's recommendation algorithm promotes disturbing and hateful content that often violates the platform's own content policies.

According to the study, 71 per cent of the videos that volunteers reported as regrettable had been actively recommended by YouTube's algorithm. It also found that almost 200 videos recommended by YouTube to the volunteers were later removed, including several that the platform itself deemed to violate its policies, after cumulatively garnering over 160 million views.

Mozilla collected data from 37,380 YouTube users who volunteered to share their regrettable experiences arising from following the algorithm's recommendations. The data was provided through RegretsReporter, a browser extension and crowdsourced research project through which users could report these experiences.

"YouTube needs to admit their algorithm is designed in a way that harms and misinforms people," Brandi Geurkink, Mozilla's Senior Manager of Advocacy, said in a blog post published by the foundation.

"Our research confirms that YouTube not only hosts, but actively recommends videos that violate its very own policies. We also now know that people in non-English speaking countries are the most likely to bear the brunt of YouTube's out-of-control recommendation algorithm," she added.

YouTube Recommendations: A Rabbit Hole

According to Mozilla, the data collected reveals how YouTube's recommendation algorithm pushes harmful, inappropriate and misleading content online.

Recommended videos were found to be 40 per cent more likely to be reported than videos the volunteers had searched for. Furthermore, reported videos performed well on the platform, drawing 70 per cent more views per day than other videos watched by the volunteers.

One of the volunteers was recommended a misogynistic video titled "Man humiliates feminist in viral video" after watching a video about the United States military. Another volunteer was watching a video about software rights when the algorithm recommended a video about gun rights. Yet another was recommended a highly sensational conspiracy theory video about former US President Donald Trump while watching an Art Garfunkel music video.

The study also found that the algorithm treats people unequally based on location. Volunteers in countries where English is not a primary language reported videos at a 60 per cent higher rate than those in English-speaking countries.

This was found to be especially true for pandemic-related reports. Among reported videos in English, only 14 per cent were pandemic-related; among non-English videos, the figure was 36 per cent.
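The relative-rate figures above (40 per cent, 60 per cent, 70 per cent) all follow the same arithmetic. A minimal sketch of that calculation, using hypothetical counts chosen purely for illustration (these are not Mozilla's raw numbers):

```python
def pct_increase(rate_a: float, rate_b: float) -> float:
    """How much higher rate_a is than rate_b, expressed as a percentage."""
    return (rate_a / rate_b - 1) * 100

# Hypothetical example: regret reports per 10,000 videos watched
recommended_rate = 140 / 10_000   # reports among recommended videos
searched_rate = 100 / 10_000      # reports among searched-for videos

# 140 vs 100 reports per 10,000 corresponds to "40 per cent more likely"
print(round(pct_increase(recommended_rate, searched_rate)))  # 40
```

The same formula applied to a 36 per cent versus 14 per cent share of pandemic-related reports shows non-English reports running well over twice the English rate.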

The study made the following recommendations to YouTube and policymakers on how to improve recommendation algorithms:

  • Platforms should publish frequent and thorough transparency reports that include information about their recommendation algorithms
  • Platforms should provide people with the option to opt-out of personalized recommendations
  • Platforms should create risk management systems devoted to recommendation AI
  • Policymakers should enact laws that mandate AI system transparency and protect independent researchers

An Industry-Wide Issue

This is not the first time that concerns have been raised about the algorithm of a big-tech company.

In August 2020, advocacy group Avaaz did a year-long study of health-related misinformation on Facebook and found a similar pattern.

Also Read: Health Misinformation Racked Up Billions Of Views On Facebook: Report

The report, titled "Facebook's Algorithm: A Major Threat to Public Health", found that the top 10 health-misinformation-spreading websites drew four times as many views as equivalent content on the websites of the top 10 leading health institutions, such as the World Health Organisation (WHO) and the Centers for Disease Control and Prevention (CDC).

It revealed that Facebook's efforts in minimising the spread of such health-related misinformation were outperformed by the amplification of such content by Facebook's own algorithm.
