OpenAI Takes Steps to Protect Teens While Enhancing Adult AI Experience
Following serious incidents in which ChatGPT provided potentially harmful information to minors, OpenAI is introducing new safeguards for teenage users. The company is developing ways to estimate users’ ages so that anyone identified as under 18 receives a modified ChatGPT experience with reduced functionality. Rather than attempting to retrain the model to respond more safely to teens, OpenAI is opting to restrict what they can access.
At the same time, OpenAI is reaffirming its commitment to adult users: when interacting with adults, ChatGPT will be permitted to provide sensitive content, including material related to self-harm, when it is framed for “educational purposes.”
New Safety Features for Teens
OpenAI will implement age-prediction technology that evaluates user behavior for teen-like traits. If detected, ChatGPT will automatically switch to a version tailored for users under 18. Changes will include stricter content policies, parental controls to monitor interactions, and additional safety measures.
Since age prediction is not foolproof, OpenAI will also offer an option to verify age with an official ID, despite potential privacy concerns. Conversations in which teens discuss sensitive topics such as suicide, or engage in inappropriate interactions with ChatGPT, will receive heightened scrutiny. In such cases, OpenAI may first contact parents and, if necessary, alert the relevant authorities.
Focus on Adults and New Features
OpenAI and CEO Sam Altman are also working on updates to GPT-5, which some users feel has become too robotic and lost its human touch. Recently, the company launched the affordable ChatGPT Go plan in India at ₹399 per month, offering access to GPT-5, image generation, higher query limits, and other premium features.