Why Shouldn’t I Use AI for Psychotherapy?

Admin TPG • August 7, 2025

It’s no secret that many people have started turning to AI chatbots for emotional support and to seek solutions to personal problems.

Given AI’s 24/7 accessibility, the promise of “free therapy,” and the perceived anonymity of chatbot conversations, it is hardly surprising that AI engagement is supplanting some traditional person-to-person therapy sessions.

The barriers to traditional talk therapy and mental health support can be significant. People may take their personal struggles to AI because private psychotherapy and counselling seem inaccessible due to cost or other practical barriers such as time and distance. And some will feel safer engaging with a chatbot than facing the possible discomfort of opening up to a stranger, even though opening oneself to emotional challenge is itself key to the gains of psychotherapy.

However, AI “therapy” carries not only significant limitations but also potentially major health and safety risks. For many reasons, human therapists remain indispensable. AI cannot supplant real, licensed, and extensively trained clinicians.

What Makes Professional Therapy Healthier than AI Support?

Genuine empathy, the capacity to listen deeply, and the commitment to meet another person with real curiosity cannot be replicated by AI, which combines scraped information and mirroring to produce a facsimile of human interaction.

Because AI is programmed to reflect, and even amplify, what the user brings to the interaction, such exchanges carry inherent risk: AI can reinforce the very thought patterns and behaviours a person is seeking help with. A human therapist is mindful and discerning, bringing curiosity and ongoing assessment of vulnerability into every client interaction. This is crucial when people are struggling with mood, distorted self-perception, and maladaptive patterns carried over from early life experience.

A human therapist offers key interpersonal capacities that AI cannot: 

A live psychotherapist validates selectively, drawing attention to and exploring potentially negative ideas and behaviour patterns with their client. Human therapists ideally practice with humility and challenge their clients thoughtfully, knowing they “don’t know what they don’t know,” which is something AI cannot do. ChatGPT does not communicate uncertainty, an omission that could be profoundly problematic in a therapeutic context. AI chatbots are not collaborative!

Most importantly, a real therapist tailors each session to a unique individual, adapting general assessment and treatment approaches to that particular person.

What Could Go Wrong? Worst-Case Scenarios with AI

We all know about AI “hallucinations”: moments when a chatbot confidently invents things. First-hand stories and reports are accumulating that show AI can become a dangerous echo chamber when people are caught in periods of delusional thinking.

In threads on Reddit, for example, individuals have described loved ones becoming unmoored after unboundaried interaction with AI. Such interactions gone awry have led people to think of themselves as chosen, as gods or gurus, or to believe they have found new and better friends or partners in their AI chatbots.

Some individuals are at particular risk. Non-life-threatening struggles may become critically dangerous ones without a human professional available for actual treatment, diagnosis, or personalized, empathetic support.

So, What Is AI Good For, Then?

While AI substitutes poorly for real therapy with a human therapist, it can be useful when engaged within clear boundaries; there are safe and healthy ways to use AI in the realm of personal concerns.

The accessibility of AI, its cost-effectiveness, and its 24/7 availability make it likely that AI will play an ongoing role in the field of mental health. Right now, the rule is user beware: there are few to no guardrails against the dangers AI-based therapy poses. Sycophantic interactions with ChatGPT and other forms of AI cannot help individuals with meaningful and singular human challenges, particularly in an era of burgeoning mental health crises. Whether AI will figure as a useful adjunct in the field or open a Pandora’s box of consequences for general human wellbeing is not yet known.
