You’re scrolling late at night, feeling anxious, and you see an ad for an AI therapist. Available 24/7. No judgment. No waiting lists. Just type your problems and get instant support. It sounds… tempting. Especially when getting an appointment with an actual therapist takes weeks and costs money you might not have.
But here’s the question everyone’s asking: Is AI therapy actually helpful? Or is it a convenient substitute that misses what therapy is actually about?
The rise of digital mental health tools raises real questions. Can algorithms provide genuine support? What are the risks? When might these tools be useful, and when are they actively harmful? And what does the existence of AI therapy tell us about how broken our mental health system actually is?
Understanding the role of these tools isn’t about being anti-technology or pretending AI can’t be useful. It’s about being clear-eyed about what these tools CAN do, what they CAN’T do, and what happens when we treat them as replacements for human connection and clinical expertise.
Can You Use ChatGPT as a Therapist?
Technically, you CAN type your problems into ChatGPT. People do. But should you use ChatGPT as an AI therapist? That’s a different question entirely.
ChatGPT is not designed or trained as therapeutic software. It’s a general-purpose language model. While it can provide empathetic-sounding responses and general mental health information, it’s not a clinical tool. It’s not HIPAA compliant. It’s not designed for crisis intervention. It has no ethical framework specific to mental health care.
The risks of using ChatGPT this way:
Your data isn’t protected. What you share with ChatGPT may be used to train future models unless you opt out. Your deepest vulnerabilities, your trauma, your struggles… that’s not confidential. Using ChatGPT this way means giving up the privacy that’s fundamental to real therapy.
It can confidently give terrible advice. ChatGPT generates plausible-sounding responses, but it doesn’t actually know what’s true or helpful. It might suggest things that sound reasonable but are clinically contraindicated for your specific situation.
It can’t recognize when you need more help. If you’re in crisis, actively suicidal, experiencing psychosis… ChatGPT can’t reliably recognize it. It won’t escalate your care. It won’t call for help. Using general AI for mental health support when you need actual intervention is dangerous.
It reinforces isolation. One of the things therapy does is reconnect people to human relationship. Using ChatGPT moves you further from human connection, which is often exactly what people struggling with mental health most need.
Are there purpose-built tools that are better?
Sort of. Apps like Woebot and Wysa are specifically designed as mental health support tools, and companion apps like Replika are often used the same way. They have some clinical framework, more guardrails, and some claim HIPAA compliance. But they still face fundamental limitations… they can’t truly understand, they can’t provide relational healing, and they can’t handle complexity.
These purpose-built tools are better than using ChatGPT as an AI therapist, but they’re still not replacements for human therapy. At best, they’re supplements or bridges.
What Percent of People Use AI as a Therapist?
The numbers on AI therapy usage are still emerging, but the growth is notable and concerning.
Recent surveys suggest that approximately 15-20% of people have tried some form of AI therapist support. Among younger adults (18-29), that number jumps to around 30-35%. The trend is accelerating rapidly.
Why the growth?
Accessibility. These tools are available instantly. No waiting lists. No scheduling. No commute. For people in crisis at 3am, that immediacy is appealing.
Cost. Many AI therapist tools are free or significantly cheaper than human therapy. When therapy costs $150+ per session and insurance doesn’t cover it, a free app becomes attractive.
Reduced stigma. Some people find it easier to open up to an AI therapist than a human. There’s less fear of judgment, no worry about burdening someone, no social anxiety about the interaction itself.
Shortage of providers. There aren’t enough therapists to meet demand. Waitlists stretch for months. People turn to these tools not because it’s ideal, but because it’s available.
What the data shows:
Most people using these tools are using them as a supplement, not a replacement. They’re accessing these tools between human therapy sessions or while on waitlists for traditional care.
The majority of users report some benefit from AI mental health tools, particularly for basic psychoeducation and coping skills. But satisfaction is significantly lower than with human therapy.
Concerning pattern: About 25% of people who start with an AI therapist never transition to human care, even when they need it. AI becomes a substitute rather than a bridge.
Is AI Therapy FDA Approved?
No. And this is important to understand because it reveals just how unregulated this space actually is.
The FDA doesn’t regulate psychotherapy the way it regulates medications or medical devices. Therapy itself isn’t “FDA approved” because it’s a professional service, not a product. So technically, neither human therapy nor AI therapy would be “FDA approved” in that traditional sense.
However… when we’re talking about AI therapy tools, the question becomes: Are they regulated at ALL?
The regulatory landscape is murky:
Some apps position themselves as “wellness” tools rather than medical devices, which means they face minimal oversight. They’re not required to prove efficacy or safety the way medical treatments are.
There’s no standardization. One app might have clinical oversight and evidence-based frameworks. Another might be built by tech people with no mental health training. There’s no way for consumers to know the difference.
Privacy regulations vary. HIPAA applies to healthcare providers, but many apps operate in gray areas where it’s unclear if they’re covered entities. Your data might not have the protections you assume it does.
The FDA is starting to look at mental health apps and AI tools, but regulation is lagging far behind the technology. Right now, these tools operate in a largely unregulated space with minimal accountability.
What this means for users:
When you use these tools, you’re essentially trusting that the company built them responsibly, that the algorithms won’t give harmful advice, and that your data is protected. But there’s no regulatory body ensuring any of that.
If something goes wrong… if the AI gives advice that leads to harm… there’s no clear accountability. No licensing board to file a complaint with. No malpractice insurance. Just terms of service that probably limit the company’s liability.
When Might These Tools Be Useful?
Let’s be practical. AI therapy tools exist. People use them. When might an AI therapist actually be helpful?
As psychoeducation. Learning about mental health concepts, coping skills, or understanding your symptoms… these tools can provide information that might be helpful as a starting point.
Between human therapy sessions. If you’re already working with a real therapist and need support between appointments, AI tools could provide supplemental coping strategies or space to process thoughts.
When human therapy isn’t immediately accessible. If you’re on a waitlist, or it’s the middle of the night, or you’re in a location with no providers… AI support might be better than nothing as a temporary measure.
For very specific, straightforward issues. If you need help with a concrete anxiety management technique or want to journal with prompts, AI can deliver that without needing human clinical judgment.
When NOT to use these tools:
- You’re in crisis or having thoughts of self-harm
- You have complex trauma that requires relational healing
- You’re dealing with serious mental illness
- You need someone who can actually see you and respond to full context
- You need accountability and human ethical responsibility
The Bigger Question
Here’s what the rise of the AI therapist actually tells us: our mental health system is so broken that people are turning to algorithms for help because they can’t access human care.
The waitlists are too long. The costs are too high. The shortage of providers is real. So people are trying AI therapy not because it’s ideal, but because it’s available.
At Annapolis Counseling Center, we’re not afraid of technology. We use it where it’s helpful. But we also know that real therapy works through human connection. Through being genuinely seen, understood, and supported by another person who has training, ethics, and the capacity for authentic relationship.
These tools can help. They can provide some support, some information, some structure. But they’re not therapy. And we need to be honest about that distinction so people aren’t settling for AI when what they actually need is human connection and clinical expertise.
If you’re considering AI therapy because real therapy feels inaccessible, we get it. The system is frustrating. But don’t let convenience replace what you actually need. Keep looking for human care, even if AI fills the gap temporarily.
You deserve someone who actually sees you, not just an algorithm that simulates understanding. At Annapolis Counseling Center, we provide the human connection and clinical expertise that no AI can replicate. Because some things can’t be automated. And healing is one of them.