When a young person in distress turns to an AI chatbot rather than a trained clinician, it underscores a global health crisis: mental health services are overstretched, underfunded, and inaccessible for many.
In response, we’re seeing a surge in AI tools designed to close the gap. AI chatbots are now widely used to deliver mental health support, often acting as the first point of contact. While the appeal of immediate help, anytime, anywhere is clear, so are the risks.
Mental health care is not just about information exchange—it’s about human connection. The trust built between clinician and patient, the ability to read between the lines, the subtle shifts in body language or voice that indicate something is wrong—these are not things AI can replicate.
Susie Alegre, a lawyer and author, emphasised at the Hay Festival recently that therapy involves more than validation—it requires constructive pushback to support healthy human development. She expressed concern over AI being used as a social replacement for isolated teenagers, warning that it may disrupt their ability to form healthy human relationships.
AI as a support tool
That is not to say AI has no place in mental health services. Quite the opposite. Used thoughtfully, AI can help clinicians deliver better care by removing the friction that too often gets in the way. But we must be clear: AI should support clinicians, not replace them. In doing so, it can address real and practical challenges that clinicians face every day.
AI tools in mental health should be built around this principle: they work quietly in the background, transcribing and structuring medical notes in real time. This means therapists no longer need to spend hours after appointments writing care plans, safeguarding notes, or audit reports. Ultimately, when clinicians are freed from administrative overload, they have more time and energy for the people who need them most.
In the UK, the NHS has incorporated AI assistants such as Limbic for preliminary mental health assessments. Limbic is the most widely used mental health triage chatbot in the NHS, available in 17 regions of the UK. It is currently accessible to 9 million people across the UK and has so far freed up over 20,000 clinical hours for IAPT services.
Additionally, AI medical scribes like Tandem Health are being adopted to automate clinical documentation, reducing the administrative burden on clinicians. These AI medical scribes transcribe visits and generate notes, allowing clinicians to focus more on patient care.
Importantly, AI should never make clinical decisions. It should not attempt to diagnose, treat, or intervene without human oversight. Our technology doesn’t pretend to be a therapist—because it isn’t. What it does is enhance the clinician’s ability to be fully present in the room, without being distracted by data entry or administrative checklists.
Substituting human expertise with automation won’t fix the roots of the crisis. It risks deepening inequalities, especially when AI models are trained on data that doesn’t reflect the full diversity of the populations they serve. It risks creating systems that are fast, but not safe. Efficient, but not empathetic.
Regulation and ethical considerations
Gaia Marcus, director of the Ada Lovelace Institute, advocates for stronger regulation of AI to ensure equitable and safe technology deployment that aligns with public expectations.
Marcus, who heads the UK think tank, also highlights concerns over the concentration of AI power in a few large companies and stresses the importance of understanding the socio-technical implications of AI.
Survey data indicates growing UK public demand for AI regulation: 72% of respondents say they would feel more comfortable with AI if laws were in place, and 88% support government intervention to prevent harm after deployment.
Even so, AI that automates the paperwork enables clinicians to spend more time face to face with those who need their care. The result? Better outcomes, lower burnout, and a healthcare system that feels a little more human again.
As AI continues to reshape our world, we must choose carefully how we apply it—especially in something as delicate and essential as mental health. There’s real potential to do good. But only if we remember that compassion isn’t codable.