When Your “Therapist” Is a Chatbot, Don’t Expect Confidentiality: Sam Altman Raises Alarm on AI Privacy Gaps
Friday, July 25, 2025
Let’s say the hard part out loud. More people than ever are turning to ChatGPT not just for directions, recipes, or resume tips—but for emotional support.
It’s 2025, and your therapist might be a chatbot.
But here’s the catch: Sam Altman, CEO of OpenAI, says those heartfelt confessions aren’t protected by the same legal privileges as your real therapist’s notepad.
In a conversation this week with comedian and podcast host Theo Von, Altman laid it out plainly: “If you go talk to ChatGPT about your most sensitive stuff and then there’s like a lawsuit or whatever, we could be required to produce that,” he said. “And I think that’s very screwed up.”
It is.
And it’s something that, until recently, didn’t seem like an urgent legal gray zone.
But now, with millions of users treating ChatGPT like an always-on therapist, life coach, and digital diary, the stakes have changed significantly.
Therapy Without Privilege
There’s a reason therapist offices still come with Kleenex boxes and confidentiality clauses. Under current law, your sessions with a licensed therapist—or your conversations with a doctor or lawyer—are protected by what’s known as privileged communication. It means that, in most cases, those conversations are off-limits in court. They can’t just be subpoenaed because someone’s feeling curious or litigious.
But AI isn’t covered by that.
“There’s doctor-patient confidentiality, there’s legal confidentiality,” Altman said. “We haven’t figured that out yet for when you talk to ChatGPT.”
Translation: If you pour your heart out to ChatGPT about your marriage, your anxiety, your trauma, or even your illegal behavior, that chat could become evidence if someone with legal standing comes knocking.
A Growing Ethical Mess
Altman isn't throwing his hands in the air. In fact, he thinks the legal system needs to move quickly. “There should be the same concept of privacy for your conversations with AI that we do with a therapist,” he said. “And it should be addressed with some urgency.”
Why the urgency now?
Because millions of users—especially younger ones—are already treating ChatGPT like a therapist. The app is quick, free, and nonjudgmental. It doesn’t sigh. It doesn’t have a waiting list. It doesn’t even blink when you tell it you hate your partner or fear your kid might be depressed.
That sense of safety feels real—but it isn’t.
“No one had to think about that even a year ago,” Altman said. “Now it’s a huge issue.”
What's Really Stored—And For How Long?
Many users assume their chats vanish into the ether. But unless you actively disable chat history, OpenAI staff can read your conversations. That’s part of how the model improves and how misuse is detected. Even when you delete a conversation, OpenAI’s policy states that chats may be retained for up to 30 days—and potentially longer if needed for “legal or security reasons.”
That’s not just theoretical. In June, The New York Times and other plaintiffs requested a court order demanding OpenAI retain all user logs, including deleted conversations, as part of a high-stakes copyright lawsuit. OpenAI is currently appealing that order.
If you’ve ever typed something into ChatGPT that you wouldn’t want read in open court—or published in discovery—this is your wake-up call.
Beyond Chatbots: Kids and Algorithmic Exposure
Elsewhere in the interview, Altman voiced concerns about a different kind of psychological vulnerability: kids and social media. A new father as of February, he shared his worries about how addictive digital platforms are rewiring childhood itself.
“I think a lot about the mental health consequences of this stuff,” he admitted.
It’s worth noting that this concern echoes growing research about algorithm-driven anxiety and attention dysregulation in children (Twenge & Campbell, 2018; Haidt, 2024). The same addictive mechanics that power TikTok’s For You page are being layered onto emerging AI platforms—and children aren’t exactly equipped with the nuance to distinguish between a friend, a therapist, and a responsive language model trained on Reddit.
So, Should You Break Up with Your AI Therapist?
Not necessarily. But you should use it with eyes wide open.
ChatGPT is a powerful tool, not a legally protected confidant. It can reflect empathy, but it can’t grant you privacy rights.
It can generate insight, but it won’t raise a HIPAA flag when you mention suicidal thoughts or past abuse.
And unless laws evolve, it may never do so.
Until then, treat your ChatGPT sessions the way you'd treat a conversation on a monitored work email: insightful, yes—but not sacred.
Be Well, Stay Kind, and Godspeed.
REFERENCES:
Haidt, J. (2024). The anxious generation: How the great rewiring of childhood is causing an epidemic of mental illness. Penguin Press.
Twenge, J. M., & Campbell, W. K. (2018). Associations between screen time and lower psychological well-being among children and adolescents: Evidence from a population-based study. Preventive Medicine Reports, 12, 271–283.
OpenAI. (2024). Privacy policy and data retention terms. https://openai.com/policies/privacy-policy
The New York Times Co. v. Microsoft Corp. and OpenAI, Inc. (2023). U.S. District Court, Southern District of New York.