Therapists Using ChatGPT: Five Core Dangers You Shouldn’t Ignore
Therapists using ChatGPT. That phrase alone should trigger alarms, but if you’re in the mental health matrix these days, you’re probably already close to the static. When trust is currency and your secrets are the payload, finding out your therapist is feeding them to an AI hits different. Here’s why this is more than a glitch — and why you should care.
1. Trust Implosion: When Your Confessions Become Prompts
Your therapist is the firewall between your rawest self and the outside world. So when a session turns into a covert AI handoff — as it did for Declan, watching his pain become copy-pasted prompts — you get a breach. ChatGPT might spit out nice-sounding “insights,” but the relationship? Corrupted. Agentic AI in healthcare isn’t automatically evil, but hiding it is an instant trust nuke. When the human element is replaced by quietly looped-in algorithms, you’re not getting therapy; you’re starring in someone else’s Turing test.
2. Data Leaks and Privacy Sinkholes
Your secrets are premium black market goods. Therapists who blithely toss them into LLMs are trading your privacy for their own convenience. Even “anonymized” data put into generative AI platforms is only as secure as the weakest link — which, in healthcare tech, is often weaker than you think. Cognitive scaffolding with LLMs offers scary efficiency, but it also opens holes big enough to drive a privacy lawsuit through.
3. The Faux-Human Factor: Losing Touch With the Real Deal
When Hope got a grief message from her therapist that began with “Here’s a more human, heartfelt version…”, the mask slipped. Suddenly, the empathy felt synthetic. There’s research (2025, PLOS Mental Health) showing people rate therapist messages lower when they suspect AI is involved — even if the message is perfectly crafted. It’s not just about what’s said, but who’s saying it — and why. AI’s best trick is mimicking care, but your brain can spot a fake handshake a mile away.
4. Boundary Blurring: Who Are You Really Talking To?
If you’re getting AI-generated wisdom in the middle of a session, who exactly is holding the space for you? The therapist, or a machine trained on digital detritus from a billion anonymous souls? At what point does therapy become a chatbot with a human middle manager? If you want a bot, there are plenty. If you want a human, don’t settle for a proxy. And if you want to dive deeper into AI’s tendency to play nice and say what you want to hear, read more on sycophancy in LLMs.
5. Charging for Automation: The Ethics Blackout
It’s cute that Declan’s therapist cried after getting caught. But guess what? Declan still got billed — full freight — for a session quietly outsourced to silicon. Patients pay for human insight, not recycled AI wisdom. Sliding part of the job onto ChatGPT (especially in secret) is unethical at best, malpractice with a side of dystopia at worst. At minimum, patients deserve three things:
- Transparency: Anything less than full disclosure is a betrayal. If your therapist is using AI, you should know.
- Consent: You should get a choice before your data is even whispered to a language model.
- Compensation: If you’re getting copy/paste therapy, your bill should reflect the automation discount.
Conclusion: What Next for Therapists Using ChatGPT?
AI in therapy isn’t all doom. There’s promise if it’s used openly and on your terms: helping draft summaries, writing follow-ups, organizing notes — with your explicit consent. But secrets make bad therapy and worse tech. If you even suspect your shrink is double-timing with a bot, ask directly. If they squirm, consider whether you want your healing crowdsourced by code.
For a deeper dive into the glossy promises (and brutal flaws) of today’s AI, don’t miss our AI hype index. And if you catch your therapist reading straight off ChatGPT again? Start billing them for your time. After all, you did most of the emotional labor.