OpenAI has announced plans to introduce parental controls on ChatGPT within the next month, following mounting concerns about the chatbot’s potential role in cases of self-harm among teenagers.
The company said the new feature will allow parents to link their accounts with their children’s, restrict functions such as memory and chat history, manage how the chatbot responds, and receive alerts if signs of “acute distress” are detected during use.
“These steps are only the beginning. We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible,” OpenAI said in a blog post on Tuesday.
The move follows a lawsuit filed against the company by the parents of 16-year-old Adam Raine, who allege that ChatGPT contributed to their son’s suicide.
Similar lawsuits have previously targeted other AI chatbot platforms, including Character.AI, over claims that their chatbots gave harmful advice to minors.
While OpenAI did not directly link its decision to these lawsuits, it acknowledged that “recent heartbreaking incidents” had shaped its safety measures.
The company also acknowledged that existing safeguards—such as directing users to helplines and crisis support services—work best in short exchanges but can become less reliable over prolonged conversations.
“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions… We will continually improve on them, guided by experts,” an OpenAI spokesperson said.