We've called for significant reform of the UK's regulatory framework for artificial intelligence (AI) in healthcare, warning that current arrangements do not provide sufficient clarity or safeguards, particularly in mental health.
The call comes in our response to the Medicines and Healthcare products Regulatory Agency (MHRA) consultation on the regulation of AI in healthcare, in which we emphasised that while AI has the potential to support innovation, trust, safety and professional accountability must come first.
We said:
"AI is being introduced into healthcare at pace, but regulation has not yet caught up with the complexity of how these tools are being used. Clear and credible regulation is essential to protect both service users and professionals."
Defining AI is a top priority
We also highlighted that the consultation did not provide a clear definition of AI, making it difficult to determine which technologies fall under regulation.
We believe it's vital to be clear about what constitutes AI in healthcare, differentiating between tools used in clinical care, administrative tasks, and those used by the public outside formal healthcare.
Without this clarity, commissioners and professionals cannot be confident about which tools are safe or regulated.
Rising concerns among therapists
Our Mindometer survey's recent findings illustrate why reform and clarity are urgently needed:
- 64% of therapists reported a decline in public mental health over the past year, with 43% citing AI technologies as a contributing factor.
- 28% noticed clients receiving unhelpful advice from AI tools such as ChatGPT.
- Among therapists working with children and young people, 38% observed a rise in children seeking mental health guidance from AI chatbots, while 19% reported cases where children received harmful advice.
These findings support our call for clear, accessible information for both professionals and the public about which AI mental health tools are safe, evidence-based and regulated.
Martin Bell, our Head of Policy and Public Affairs, said: "It's understandable that people are increasingly turning to AI for mental health advice because it's available 24/7, feels non-judgemental, and offers a sense of privacy. However, AI isn't bound by ethical or confidentiality standards and, at the moment, lacks meaningful regulation and accountability.
"While such tools can offer support, they can never replicate or replace the human touch of therapy. Human connection, authenticity, empathy, and compassion are at the heart of successful therapy: qualities only a trained therapist can provide."
Our key recommendations
- Transparency over which AI tools are regulated and how they make decisions.
- Shared accountability and liability, clearly defining responsibilities across developers, healthcare organisations, and clinicians.
- Ongoing training and capability building for healthcare professionals to understand AI鈥檚 strengths, limitations, and risks.
- Robust post-market surveillance, including monitoring adverse incidents and adapting regulation as AI technologies evolve.