OpenAI Guidelines to Prioritise ‘Teen Protection’ Over Helpfulness: What It Means


New Delhi: OpenAI has updated its safety guidelines to place greater emphasis on teen protection, signalling a shift in how ChatGPT interacts with teenage users. Under the revised approach, the AI chatbot is now instructed to encourage teenagers to seek support from trusted adults rather than positioning itself as a replacement for human relationships or professional help.

According to the updated guidelines, when teenagers seek advice on sensitive issues—such as mental health, emotional distress, or personal challenges—ChatGPT will guide them toward parents, teachers, school counselors, or other trusted adults. The goal is to ensure that young users receive appropriate real-world support and do not become overly dependent on AI for guidance.

OpenAI stated that while ChatGPT can provide general information and emotional reassurance, it should not act as a therapist or authority figure for minors. The company aims to strike a balance between being helpful and maintaining clear boundaries, especially for vulnerable age groups such as teenagers.

The move comes amid growing global concerns over the impact of artificial intelligence on children and adolescents, particularly around mental health, online dependency, and digital well-being. Experts have welcomed the step, noting that it reinforces responsible AI use and safeguards young users from potential harm.

With this update, OpenAI is aligning its systems more closely with child safety standards, ensuring that technology complements—rather than replaces—human care, guidance, and professional support.
