Meta's AI Prevents Suicide In Lucknow: How Does It Detect Self-Harm?

The AI tool, trained to recognise keywords and patterns linked to self-harm, also evaluates comments and past posts to assess urgency.

By Hera Rizwan

6 Sep 2024 12:18 PM GMT

In a recent incident, Meta helped save the life of a woman in Lucknow who was attempting suicide. Reportedly, the 21-year-old, distressed after being abandoned by her husband, posted a video on Instagram showing herself with a noose around her neck.

When the video went viral, Meta alerted the Social Media Centre at the Directorate General of Police. The police quickly responded, reached the location, and intervened to prevent the suicide. Afterward, the woman was taken out of the room and counseled by women police officers.

This is not the first time that Meta’s AI systems have been instrumental in saving lives. There have been previous reports of similar interventions, where distressing posts flagged by AI have prompted timely responses from law enforcement agencies.

How effective is Meta’s AI in detecting suicide patterns?

Kota, a major coaching hub in Rajasthan, is known for preparing students for competitive exams like the IIT-JEE and NEET. However, the city's intense academic pressure has led to a worrying rise in student suicides.

In June, a 16-year-old boy in Kota Rural was saved after uploading two Instagram reels expressing suicidal intent. Upon receiving an alert from Meta, the police quickly dispatched a team to his residence and arranged counseling for both him and his parents.

Similarly, a coaching student from Jhunjhunu district studying in Kota posted a concerning message on Facebook. Police reached out to the student in Kota and to his parents in Jhunjhunu, and the student received counseling from a licensed psychiatrist.

In 2021, a 23-year-old from Mumbai attempted to live-stream his suicide on Facebook, a broadcast that inadvertently led to his rescue. Meta's team in Ireland alerted the Mumbai Police, who quickly broke into his home and transported the unconscious man to the hospital within an hour of his attempt.

In 2020, Facebook similarly notified the West Bengal Police about a youth's live-streamed suicide attempt with a sharp weapon. The police intervened and alerted the youth's unsuspecting father.

With millions of users generating enormous amounts of data daily through posts, texts, and videos, manual monitoring is infeasible. Meta therefore uses artificial intelligence (AI) to flag content that shows signs of trouble.

How does Meta's AI detect signs of suicide?

According to its Safety Centre, the tech giant has been collaborating with experts in suicide prevention and safety since 2006, just two years after Facebook's launch in 2004, to support users across Meta platforms.

Initially, the team provided support by connecting individuals to local authorities, helplines, or non-governmental organisations. In 2018, Meta enhanced its approach by incorporating AI and machine learning to better identify potential self-harm situations.

Alluding to the introduction of the AI tool, a 2018 Meta blog read, “In the past, we’ve relied on loved ones to report concerning posts to us since they are in the best position to know when someone is struggling. However, many posts expressing suicidal thoughts are never reported to Facebook, or are not reported fast enough.”

In 2017, Facebook introduced a machine-learning model that, based on expert input, detected suicide-related keywords such as "kill," "goodbye," and "depressed." However, these words also appear in harmless contexts, producing false positives that Meta's community operations team had to filter out manually.
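As a rough illustration of why pure keyword matching over-triggers, consider the minimal sketch below. The keyword list comes from the article; the matching logic and example posts are assumptions for illustration, not Meta's actual model.

```python
# A toy keyword matcher in the spirit of the 2017 approach described above.
# The keyword list comes from the article; everything else is an illustrative
# assumption, not Meta's actual model.

KEYWORDS = {"kill", "goodbye", "depressed"}  # expert-supplied terms, per the article

def flag_post(text: str) -> bool:
    """Flag a post if it contains any suicide-related keyword."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(KEYWORDS & words)

# The weakness described above: the same words occur in harmless contexts,
# so humans had to filter the false positives manually.
print(flag_post("I feel so depressed. Goodbye."))         # True (genuine concern)
print(flag_post("This traffic is going to kill me lol"))  # True (false positive)
```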

To improve accuracy, the AI tool was then trained to detect suicidal patterns rather than isolated keywords. It analyses comments, looking for phrases like "tell me where you are" in serious cases or "I'm here for you" in less urgent ones. It also checks the content and timing of previous posts to assess whether the user is in immediate danger.
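One way to picture how such signals might be combined is a simple urgency score. The sketch below is hypothetical: the two comment phrases are cited in the article, but the weights, the one-hour window, and the scoring scheme are illustrative assumptions, not Meta's implementation.

```python
# A hypothetical urgency score combining the signals described above:
# comment phrases plus the timing of recent posts.
from datetime import datetime, timedelta

SERIOUS_PHRASES = ("tell me where you are",)   # signals an acute situation
SUPPORTIVE_PHRASES = ("i'm here for you",)     # signals lower immediate urgency

def urgency_score(comments: list[str], post_times: list[datetime]) -> float:
    score = 0.0
    for comment in comments:
        text = comment.lower()
        if any(p in text for p in SERIOUS_PHRASES):
            score += 2.0   # commenters asking for a location -> treat as serious
        elif any(p in text for p in SUPPORTIVE_PHRASES):
            score += 0.5
    # A burst of posts in a short window is treated here as an extra risk signal.
    recent = [t for t in post_times if datetime.now() - t < timedelta(hours=1)]
    if len(recent) >= 3:
        score += 1.0
    return score

comments = ["Tell me where you are", "I'm here for you"]
times = [datetime.now() - timedelta(minutes=m) for m in (5, 20, 40)]
print(urgency_score(comments, times))  # 3.5 -> high enough to escalate for review
```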

When self-harm content is reported or flagged, Meta's community operations team reviews it and takes action. If there is no immediate danger, the team connects the user with support services, such as helplines or counseling. In urgent cases, local authorities, such as the police, are notified right away.
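Taking an urgency score like the one sketched above, the two-way routing described here reduces to a threshold rule. The cutoff value and the routing descriptions below are placeholders, not Meta's actual process.

```python
# The two-way triage described above, reduced to a threshold rule over an
# urgency score like the one sketched earlier.

URGENT_THRESHOLD = 3.0  # assumed cutoff for "immediate danger"

def triage(score: float) -> str:
    if score >= URGENT_THRESHOLD:
        return "notify local authorities (e.g. police) right away"
    return "connect the user with support services (helplines, counseling)"

print(triage(3.5))  # -> notify local authorities (e.g. police) right away
print(triage(1.0))  # -> connect the user with support services (helplines, counseling)
```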

Meta also uses AI to prioritise the order in which its team reviews reported posts, videos, and livestreams. The blog post reads, "It also lets our reviewers prioritise and evaluate urgent posts, contacting emergency services when members of our community might be at risk of harm. Speed is critical."
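Review ordering of this kind behaves like a priority queue: the highest-urgency items surface to human reviewers first. A minimal sketch, assuming hypothetical item names and scores:

```python
# Review ordering as a priority queue: highest-urgency items come out first.
import heapq

queue: list[tuple[float, str]] = []
for item, score in [("reel_102", 1.2), ("livestream_7", 4.8), ("post_55", 2.9)]:
    heapq.heappush(queue, (-score, item))  # negate: heapq is a min-heap

while queue:
    neg_score, item = heapq.heappop(queue)
    print(f"review {item} (urgency {-neg_score})")
# -> livestream_7 first, then post_55, then reel_102
```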