When I began training in counselling and psychotherapy, I never imagined I'd one day be writing to an artificial-intelligence company about suicide risk. But that's exactly what I've done.
In recent months I've been watching something quietly alarming unfold. People in real psychological crisis, including those feeling suicidal, are turning to ChatGPT and other AI chat systems for help. Not as a novelty, but as their first and only point of contact.
And that's what prompted me to write an open letter to OpenAI, the company behind ChatGPT, and to share those concerns with BACP and UKCP.
The Illusion of Care
To someone in distress, an AI system can sound calm, empathic and endlessly available. It mirrors feelings. It validates pain. It stays awake at 2 a.m. when no one else does.
But it isn't human. And it isn't safe.
AI can imitate empathy, but it doesn't understand it. It can sound caring, but it carries no responsibility if its words make things worse. To a frightened or isolated person, that distinction may not be clear. The result is the illusion of care without the safety of relationship.
A Question of Duty
As counsellors and trainees, we're taught that duty of care isn't optional; it's ethical ground zero. So I asked OpenAI a simple question: if your product is now acting as a first point of contact for people in life-or-death distress, do you accept that you have a duty of care?
You can't market emotional understanding one minute and claim neutrality the next.
Safeguarding the Space Between Human and Machine
This isn't about demonising technology. AI has its place; many of us use it for admin, research or reflection. But when it starts occupying relational space, the space where empathy and presence belong, it becomes part of the helping environment. And that means it needs safeguarding, transparency and clinical oversight.
In my letter I proposed a global minimum safeguarding standard: clear disclaimers written in each country's legal language; local crisis numbers visible to every user; a "Crisis Mode" that connects people to real-time human help; independent clinical oversight in every country; and, crucially, that none of this ever sits behind a paywall.
Safeguarding should never depend on a subscription plan.
Before the Deaths Happen Here
Artificial intelligence has been publicly available in the UK for years, yet only now are professional bodies beginning to react. I didn't raise this for recognition; I raised it because silence costs lives.
I invite fellow counsellors and psychotherapists to pause and reflect. Have you encountered clients turning to AI for emotional support? How are you responding ethically, in the absence of binding guidance?
Because without clear standards, we are all left to navigate this with our own moral compass. And that leaves one urgent question hanging in the air: where does our duty of care begin and end when technology starts to sound like us?