So, you’re feeling low, therapy is getting pricier by the minute, and along comes the hype: AI chatbots, your new, always-on digital therapist. Sounds like a dream—no bills, no waiting rooms, no awkward pauses.
You might want to hold off on that cinematic dream of an all-good AI fixing the hurt inside your heart. A new study from the collective brains at Stanford, Carnegie Mellon, the University of Minnesota Twin Cities, and UT Austin just delivered a reality check: AI chatbots are not your therapist.

In fact, if you’re having a rough day, they might just point you to the nearest tall bridge. Dramatic? Maybe. But unfortunately, not that far from the truth.
Trusting an AI “Therapist” Is Playing with Fire
The diligent researchers put these so-called digital counselors through their paces against real clinical standards. Spoiler: the bots flopped. Here’s why you might want to keep your therapist’s number handy:
Crisis? More Like Crisis Enablement: Remember that indirect suicide question about NYC bridges? Popular chatbots from OpenAI and Meta, along with the “Therapist” bot from Character AI, gladly handed over bridge details—because nothing says “support” like giving directions to self-harm. Yikes.
Discrimination, Much? AI models turned out to be pretty judgmental. They showed notable stigma toward people with mental health conditions, sometimes outright refusing to “work with” users described as having depression, schizophrenia, or alcohol dependence. So much for unconditional support.
Mind the Human-AI Gap: Licensed therapists in the study responded appropriately 93% of the time. The AI bots? Less than 60%. That’s basically flipping a coin with your mental health on the line.
Therapeutic Fails, Robot Edition: Instead of grounding users in reality, these bots often encouraged delusional thinking. They missed mental health crises and handed out advice that would make a flesh-and-blood therapist cringe.
The Bottom Line (Before the Bots Take Over)
“Our research shows these systems aren’t just inadequate—they can actually be harmful,” says Kevin Klyman from Stanford’s Institute for Human-Centered AI. He’s not kidding. While the researchers aren’t saying AI has zero place in healthcare (maybe use it to remind you to take your vitamins), replacing a real therapist? Not happening.
So, next time you’re tempted to spill your soul to a chatbot, remember: it might offer you directions to a tall bridge instead of real help. For now, stick to human therapists—they’re far less likely to accidentally make things worse.
One last thing worth chewing on: AI companies keep pushing these chatbots as therapeutic digital buddies, and it’s hard not to see that as just another avenue for milking money from the vulnerable souls wandering among us.
