Character.AI Implements Age Restrictions for Interaction with AI Following Tragedy

The reason behind this decision is a series of lawsuits and tragic events.

Character.AI, a platform for chatting with AI characters, will restrict access for users under 18. The measure comes amid ongoing legal action against the company.

“Deciding to disable the unrestricted communication feature was not easy for us, but we believe it is the only right choice given the current circumstances,” the company stated.

Until November 25, minors' interactions with bots will be limited: initially capped at two hours per day, with the limit gradually decreasing until the chat feature is fully suspended.

Character.AI will offer teenagers alternative creative formats instead, such as video creation, storytelling, and streaming with AI characters.

To verify user age, the platform will implement a comprehensive system that combines its own technologies with solutions from third-party providers like Persona.

In parallel, the platform will establish and fund a nonprofit AI Safety Lab, an initiative aimed at developing new safety standards for AI entertainment features.

Character.AI is facing multiple lawsuits, including one from the mother of a 14-year-old who took his own life in 2024 after becoming obsessed with a bot modeled on a character from “Game of Thrones.”

Following numerous complaints, the platform has already introduced parental controls, usage notifications, and filtered character responses.

In August, OpenAI shared plans to address ChatGPT's shortcomings in handling “sensitive situations.” That decision was likewise prompted by a lawsuit from a family who accused the chatbot of playing a role in their son's death.

Meta took similar measures, altering its approach to training AI chatbots with a focus on adolescent safety.

Later, OpenAI announced it would route sensitive conversations to reasoning models and introduce parental controls. In September, the company launched a teen version of the chatbot.

On October 27, OpenAI released data showing that approximately 1.2 million of its 800 million weekly active users (roughly 0.15%) discuss suicidal thoughts with ChatGPT.

Additionally, 560,000 users show signs of psychosis or mania, while a further 1.2 million exhibit heightened emotional attachment to the bot.

“We recently updated the ChatGPT model to better recognize users in moments of stress and provide support. Beyond standard safety metrics, we have included assessments of emotional dependency and non-suicidal crises in our baseline testing — this will become a standard for all future models,” company representatives noted.

However, many believe these measures may be insufficient. Former OpenAI safety researcher Steven Adler has warned of the dangers of a race in AI development.

According to him, the company behind ChatGPT has virtually no evidence of genuine improvements in protecting vulnerable users.

“People deserve more than just words about resolving safety issues. In other words: prove that you have actually done something,” he stated.

Adler praised OpenAI for providing some information about user mental health but urged them to “go further.”

In early October, Adler analyzed an incident involving Canadian Allan Brooks, who fell into a delusional state after ChatGPT repeatedly reinforced his belief that he had discovered revolutionary mathematical principles.

Adler found that OpenAI’s own tools, developed in collaboration with MIT, would have flagged over 80% of the chat responses as potentially harmful. He believes the company has not effectively deployed these safeguards in practice.

“I want OpenAI to put in more effort to do the right thing before pressure comes from the media or lawsuits,” the expert wrote.

Lastly, it is worth mentioning that a study published in October found signs of AI model degradation linked to social media content.