AI giants OpenAI and Anthropic are implementing new technological measures to identify underage users and enhance their safety. OpenAI is prioritizing youth safety by updating ChatGPT’s “Model Spec” guidelines for how it should interact with individuals aged 13 to 17.
New measures from OpenAI and Anthropic
According to the new rules, ChatGPT always prioritizes safety when goals such as “maximum intellectual freedom” conflict with security in its interactions with young users.

The system now adopts a warm and respectful approach towards young people, rather than adopting a condescending attitude or treating them like adults. Furthermore, the AI encourages users to seek support from real-world sources by guiding them towards offline relationships.
In addition to this policy change, the company also announced it is working on a new model that can estimate users’ ages. Developed by OpenAI, this system automatically activates youth protection shields when it detects signs that a user may be under 18.
Adult users who were incorrectly flagged are now given the opportunity to verify their age. These moves come particularly in the wake of increasing legal pressure and several lawsuits against OpenAI, stemming from concerns about the mental health effects of AI bots. The company is taking a more aggressive stance, directing young people involved in risky conversations to emergency services or crisis centers.
On the other hand, Anthropic is tightening its policy of completely banning users under 18 from chatting with Claude. The company is developing a system that analyzes “subtle cues” in conversation texts to predict whether a user is of legal age and disables accounts that violate the rules.
Anthropic is also training its models to reduce “sycophancy,” the phenomenon where AI reinforces harmful thoughts by agreeing with everything a user says. Company data shows that the Haiku 4.5 model performs best in this regard, but the balance between the models’ friendly approach and honesty continues to be improved.

