Character.AI Rolls Out Teen-Focused Safety Features Following Lawsuit Allegations
Character.AI has announced plans to launch parental controls for teenage users and outlined recent safety measures, including a separate large language model (LLM) for users under 18.
The announcement follows media scrutiny and two lawsuits alleging the platform contributed to self-harm and suicide.
The teen LLM will enforce stricter limits on bot responses, particularly around romantic content, and will aggressively block sensitive or suggestive outputs.
It also aims to detect and block inappropriate user prompts. If references to suicide or self-harm are detected, users are directed to the National Suicide Prevention Lifeline.
Additionally, minors will no longer be able to edit bot responses, a feature that could otherwise be used to circumvent content restrictions.
Character.AI is also working on features to address concerns about addiction and confusion between bots and humans, issues raised in the lawsuits.