Overview: A patient finds ChatGPT in a therapy session
A recent report described a patient who accidentally discovered their therapist using ChatGPT during an online counseling session. The therapist was typing into a chat box and reading responses from the model in real time. The discovery led the client to question how private the session really was, whether they could trust their therapist, and whether an automated system was guiding their care.
The central actors in this story are the patient, the therapist, and the AI tool, ChatGPT. ChatGPT is a conversational artificial intelligence developed by OpenAI. Mental health professionals are increasingly experimenting with AI for tasks such as note taking, drafting treatment plans, creating handouts, and helping with administrative work.
Why this matters to ordinary people
Therapy depends on trust and confidentiality. When a client learns that an AI system was involved in their care without being disclosed, they can feel betrayed. That reaction connects to real concerns about data privacy, the accuracy of AI responses, and professional responsibility. These concerns affect anyone who sees a therapist, or who might in the future, including people covered by health privacy rules such as HIPAA in the United States.
How therapists are actually using AI now
Therapists report using AI tools in several practical ways. Many of these uses are aimed at saving time or improving documentation. Common uses include:
- Note taking and session summaries for clinical records.
- Drafting treatment plans and progress notes.
- Creating psychoeducational materials and worksheets for clients.
- Suggesting phrasing for sensitive topics to avoid triggering language.
- Managing scheduling, billing, and other administrative tasks.
Some clinicians use AI behind the scenes to brainstorm interventions or to check language. Others interact with AI during or just after sessions to draft a message or idea they plan to review before sending it to a client.
Potential benefits
AI tools like ChatGPT can make some parts of clinical work faster and more consistent. Potential benefits include:
- Efficiency gains, freeing time for direct care.
- Improved documentation, which can support continuity of care.
- Help with wording on complex or sensitive issues, which can reduce clinician stress.
- Quick generation of educational materials, making evidence based resources more accessible.
- Support for clinicians in underserved areas, which could expand access to services.
Harms and risks to watch for
At the same time, there are clear harms to consider. These include:
- Confidentiality breaches. If session content is fed into an online AI service without proper safeguards, private information could be stored or used in ways the client did not consent to.
- Inaccurate or inappropriate advice. AI can generate confident but incorrect information, or language that is insensitive to trauma histories.
- Reduced therapist accountability. If clinicians rely on AI outputs without sufficient oversight, responsibility for clinical decisions can become blurred.
- Triggering or retraumatizing clients. AI models are not trained to detect every nuance of a trauma history, and a single poorly worded suggestion can cause harm.
- Technical mishaps. Glitches, data leaks, or misrouted messages can expose sensitive information.
Privacy, legality, and professional rules
Legal and regulatory issues are central. In the United States, HIPAA governs how protected health information is handled by covered entities. If a therapist uses an online AI service that stores or processes identifiable session content, that may raise HIPAA compliance questions.
Licensure boards also require clinicians to act within professional standards. If therapists use AI to inform diagnosis or treatment without appropriate oversight, they may face disciplinary action or malpractice risk. Vendor contracts and data use policies vary. Many AI tools are not designed specifically for clinical use, and default settings can send data to external servers.
Transparency and informed consent
One common gap is disclosure. Many clients do not know whether AI is used in their care. Informed consent means explaining what tools are used, how data is stored, and any risks involved. Transparency helps preserve trust and gives clients the right to refuse or to set boundaries.
Best practice recommendations for clinicians
Clinicians who choose to use AI can follow practical safeguards. Professional associations and privacy experts suggest these steps:
- Ask whether the AI vendor will sign a business associate agreement when HIPAA applies; vendors that handle protected health information on a clinician's behalf generally need to be covered by one.
- Avoid pasting identifiable client information into general purpose public AI tools unless the platform explicitly supports protected health data under contract.
- Use AI for drafting or brainstorming only, and always review and edit outputs before sharing with clients.
- Document when and how AI was used in the clinical record, with rationale and oversight notes.
- Obtain explicit, written consent from clients before using AI in ways that involve their data.
- Prefer local or enterprise models that keep data within secure systems, when available.
- Establish supervision and peer review to check AI-influenced clinical decisions.
Advice for clients who are worried or curious
If you are a client and you suspect or learn that your therapist used AI without telling you, here are steps to consider:
- Ask directly. A simple question can clarify whether AI was used and how.
- Request an explanation of what data was shared, and how the clinician protects confidentiality.
- Ask for written policies and for the clinician to document any use of AI in your record.
- If you feel unsafe, consider switching providers. Your emotional safety comes first.
- Report concerns to the clinician’s licensing board if privacy or professional standards were violated.
Red flags to watch for
- Evasive or incomplete answers about whether AI was used.
- Therapists refusing to explain how your data was handled.
- Unexpected messages or documents that include parts of your session verbatim without prior consent.
- Clinical notes that seem generic, formulaic, or disconnected from what you discussed.
Policy and industry needs
Industry and regulators are under pressure to set clearer rules. Key steps that experts and advocates call for include:
- Clearer guidance from licensure boards about acceptable AI use in clinical practice.
- Stronger vendor accountability, including audited logs of how models handle health data.
- Standards for explainability and traceability so clinicians can show how AI influenced a decision.
- Research into clinical outcomes when AI supports mental health care, to measure benefits and harms.
- Updating consent forms and privacy notices to cover AI use in plain language.
Key takeaways
- Therapists are increasingly using tools like ChatGPT for notes, planning, and education, but many clients are not informed.
- AI use can improve efficiency, but it also creates risks of privacy breaches, inaccurate advice, and loss of trust.
- Informed consent, clear documentation, vendor safeguards, and professional oversight reduce risk.
- Clients should feel empowered to ask about AI use and to set boundaries if they are uncomfortable.
FAQ
Q. Is it illegal for a therapist to use ChatGPT?
A. Not automatically. Legality depends on how the tool is used, whether protected health information is disclosed without safeguards, and whether the clinician complies with professional standards and applicable health privacy laws.
Q. Could my session data be used to train AI?
A. It could, if the clinician pastes data into a general purpose AI service that uses user inputs to improve models. Responsible practice requires knowing vendor policies and avoiding that risk with identifiable data.
Q. What if my therapist says AI helps them be better?
A. That is possible. AI can assist with routine tasks and drafting materials. You can ask what tasks AI handles, how your data is protected, and whether you can opt out.
Conclusion
The case of a patient discovering ChatGPT in use during a live therapy session raises real questions for patients, clinicians, and regulators. AI can be a helpful tool for clinicians, but its benefits do not remove the need for transparency, consent, and strong data protections. Patients should be able to know whether AI is involved in their care, to understand the risks, and to make informed choices.
For therapists, the path forward is practical. Use AI thoughtfully, keep client data safe, document use, and talk with clients openly. For policy makers and vendors, the next steps are clearer standards, contracts that protect health data, and research that measures clinical impact. With those safeguards, AI can support mental health care while preserving the trust that therapy depends on.