Sam Altman Says ChatGPT Will Stop Discussing Suicide With Users It Estimates Are Under 18

What happened, and who said it

OpenAI CEO Sam Altman announced that ChatGPT will change how it responds to conversations about suicide when it believes the user is under 18. The company plans to build an age prediction system that estimates whether a user is a minor and, if so, restricts guidance on self-harm and suicide. Altman published the plan in a blog post, and the announcement came just before a Senate hearing on AI harms and youth safety.

This is a high-profile move. It ties together OpenAI, its widely used chatbot ChatGPT, and lawmakers' concerns about how AI interacts with teens. Policymakers, parents, clinicians, and developers will watch how OpenAI balances privacy, free expression, and safety as it updates chatbot behavior.

Plain-language definition: what is an age prediction system?

An age prediction system is a software model that tries to guess a user’s age from signals such as how they type, what they ask, and other usage patterns. OpenAI says it will use such a system to decide whether ChatGPT should avoid giving information about suicide or self-harm to users it thinks are under 18.
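OpenAI has not described how its model works. Purely as an illustration of the general idea, the sketch below trains a toy classifier on made-up behavioral features with scikit-learn; the features, data, and threshold are all invented for this example and are not OpenAI's system.

```python
# Hypothetical illustration only: a toy classifier that guesses whether a user
# is a minor from made-up behavioral features. This is not OpenAI's model.
from sklearn.linear_model import LogisticRegression

# Invented features per user: [avg words per message, slang ratio,
# fraction of messages sent during school hours, homework-topic share]
X = [
    [8,  0.40, 0.70, 0.60],   # labeled "minor" in this toy data
    [9,  0.35, 0.65, 0.55],
    [22, 0.05, 0.20, 0.05],   # labeled "adult"
    [18, 0.10, 0.25, 0.10],
]
y = [1, 1, 0, 0]              # 1 = estimated minor, 0 = estimated adult

model = LogisticRegression().fit(X, y)

def likely_minor(features, threshold=0.5):
    """Return True if the toy model thinks the user is probably under 18."""
    probability_minor = model.predict_proba([features])[0][1]
    return probability_minor >= threshold

# A new user whose behavior resembles the "minor" rows above
print(likely_minor([10, 0.30, 0.60, 0.50]))  # likely True
```

Even in this simplified form, the weaknesses discussed later are visible: the guess depends entirely on which signals are chosen, how representative the training data is, and where the threshold is set.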

Why this matters to everyday people

Chatbots are used by many teens and adults for homework, mental health questions, and personal advice. When a major provider changes how the bot talks about suicide, it affects young people who seek help, parents who worry about safety, and clinicians who may rely on AI tools. The announcement raises practical questions about accuracy, privacy, and how to help someone in crisis.

Immediate implications

  • ChatGPT may refuse to provide detailed information about suicide to users it believes are under 18.
  • The system relies on inferred age rather than verified identity, which carries different privacy and accuracy trade-offs.
  • Lawmakers and regulators will likely scrutinize the approach during and after the upcoming Senate hearing.

How OpenAI explains the trade-offs

OpenAI frames the change as an attempt to balance three goals: privacy, freedom of expression, and safety. The company says these goals can conflict. For example, protecting user privacy means avoiding intrusive identity checks, but that reduces certainty about a user’s age. Protecting safety may mean limiting certain conversations with minors, but such limits can also restrict access for adults.

Technical and ethical challenges

Age prediction from behavior is not perfect. There are several technical and ethical risks that matter for people using chatbots.

Main technical concerns

  • Accuracy and bias. Age models can make mistakes, and they may be less accurate for certain demographic groups. That can lead to false positives where adults are treated as minors, or false negatives where teens are treated as adults.
  • Circumvention. Users could try to change wording or behavior to avoid being labeled under 18, which may reduce the effectiveness of safety protections.
  • Edge cases. Vulnerable users, for example people with developmental differences or nonstandard language patterns, may be especially likely to be misclassified and to miss the kind of response they need.

Privacy and consent issues

Estimating age from behavioral signals raises privacy questions. Users may not know the system is inferring their age, or what data it uses. Important questions include whether OpenAI will retain the behavioral data, how long it will keep it, and whether users can request deletion or opt out.

How this interacts with free expression and safety

Restricting information about suicide for minors is intended to reduce harm. At the same time, limiting conversations about suicide can block access to potentially helpful coping strategies or support for people who are actually adults. This is a tension between protecting vulnerable users and preserving legitimate access to information.

Legal and regulatory angle

The announcement is likely to draw attention from lawmakers and regulators who are focused on child safety online. The upcoming Senate hearing on AI harms and youth safety will examine how AI companies handle interactions with minors. Regulators that oversee child protection, consumer privacy, and health information may press for more transparency, audits, or legal limits on automated decisions about minors.

Practical effects for different groups

For teens and young people

  • Some teens who ask about self-harm may get a different response than adults do, including more encouragement to seek immediate help from trusted adults or professionals.
  • Teens may try to avoid detection if they want more detailed information, which could lead them to other sources online that vary in reliability.

For parents and educators

  • Parents may see this as an additional safety layer for their children, but they also need to know how the system works and what it will and will not do.
  • Schools and counselors should review how they use AI tools and whether additional guidance or human oversight is necessary.

For clinicians and mental health services

  • Clinicians who use chatbots in practice should verify how the tool handles at-risk users and plan for false classifications.
  • AI guidance should not replace emergency care or crisis intervention; services must maintain clear escalation paths to human responders.

For developers and third-party services

  • Developers using OpenAI’s APIs should expect new policy requirements and possible API changes to enforce age-based behavior.
  • Third-party apps that rely on chatbots will need to consider consent, age verification options, and how to present safety messaging to users; see the sketch after this list.
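OpenAI has not said what, if anything, developers will be required to change. As a rough illustration of the consent and safety-messaging considerations above, here is a hypothetical application-side sketch: `call_chat_model` stands in for whatever chat API the app already uses, `user_flagged_as_possible_minor` is an assumed flag the app maintains itself, and the keyword screen and crisis text are placeholders, not a real safety system.

```python
# Hypothetical application-side gating sketch. None of these names come from
# OpenAI's products; they stand in for logic a third-party app might own itself.

CRISIS_NOTICE = (
    "If you are thinking about harming yourself, please contact local emergency "
    "services or a crisis line such as 988 (US) right away."
)

# Placeholder keyword screen; a real product would use a proper safety classifier.
SELF_HARM_KEYWORDS = ("suicide", "kill myself", "self-harm", "hurt myself")

def mentions_self_harm(text: str) -> bool:
    lowered = text.lower()
    return any(keyword in lowered for keyword in SELF_HARM_KEYWORDS)

def respond(user_message: str, user_flagged_as_possible_minor: bool,
            call_chat_model) -> str:
    """Route a message, adding crisis resources and limits for possible minors."""
    if mentions_self_harm(user_message):
        if user_flagged_as_possible_minor:
            # Do not pass the request through; show crisis resources instead.
            return CRISIS_NOTICE + " You can also talk to a trusted adult or a counselor."
        # Adults still get a model response, with crisis resources prepended.
        return CRISIS_NOTICE + "\n\n" + call_chat_model(user_message)
    return call_chat_model(user_message)
```

The design point of a sketch like this is that the app, not the model, decides when to show crisis messaging, which keeps escalation paths under the developer's control regardless of how the underlying chatbot behaves.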

What to watch next

OpenAI has said it will move forward with the age prediction approach, but the details and timing remain important. Key things to monitor include:

  • Rollout timeline, including when the change will appear in ChatGPT for free and paid users.
  • Transparency reports that explain how the age model works, its accuracy, and measured harms or benefits.
  • Options for users to opt out or to verify age in less invasive ways, and how OpenAI retains or deletes behavioral data.
  • Independent audits or third-party reviews of the age prediction system to check for bias and accuracy.
  • Regulatory responses and any legislative action following the Senate hearing on AI harms and youth safety.

Key takeaways

  • OpenAI announced that ChatGPT will avoid discussing suicide with users it estimates are under 18, as stated by CEO Sam Altman.
  • The plan uses an age prediction system that infers age from behavior and usage patterns rather than verified ID.
  • This approach highlights trade-offs between privacy, freedom of expression, and safety, and it poses technical risks like misclassification and bias.
  • Parents, clinicians, developers, and regulators will need clear information about accuracy, data retention, and opt-out choices.
  • Expect scrutiny at the Senate hearing and calls for transparency, audits, and possible regulation.

Short FAQ

Will ChatGPT stop all suicide-related conversations with minors?
ChatGPT will change how it responds to users it estimates are under 18, aiming to limit detailed instructions about self-harm. It will still give crisis resources and encourage contacting trusted adults or emergency services.

How will OpenAI know a user is under 18?
OpenAI plans to use a model that guesses age from signals like phrasing and behavior. This is an estimate, not a verified identity check.

Can adults be misidentified as minors?
Yes, there is a risk of false positives, where adults are treated like minors, and false negatives, where minors are treated like adults. Accuracy and bias are central concerns.

Will my data be saved for age prediction?
OpenAI has not yet published full details about data retention. Users should look for transparency reports and privacy notices that explain what signals are used and how long they are kept.

Concluding thoughts

OpenAI’s decision to change ChatGPT responses for users it estimates are under 18 responds to real concerns about youth safety. Using an age prediction model is a less intrusive alternative to identity checks, but it brings trade-offs in accuracy, fairness, and privacy. The upcoming Senate hearing will be a key moment for public scrutiny and for policymakers to ask how the system works and how harms will be measured and reduced.

For people who use chatbots, the short-term takeaway is simple. Companies should be clearer about how they infer age, what protections are in place for people at risk, and how individuals can get human help quickly when they need it.
