A welcome conversation about mental health has opened up, but as more people make the decision to reach out, too few find a supportive hand.
Not a week passes without a report on Ireland's mental health system, where lengthy waiting lists, staff shortages and inadequate facilities are the rule rather than the exception. Minister of State with special responsibility for mental health Jim Daly recently announced plans to pilot mental health "web therapy", signalling a growing recognition of the need for novel approaches.
Technology's capabilities in the mental health sphere continue to flourish, and therapeutic applications built on systems driven by artificial intelligence (AI), particularly chatbots, are a rapidly expanding arena. Yet if you needed to open up, would you reach out to a robot?
Bot benefits
While not specifically focused on AI, a study from the Applied Research for Connected Health (Arch) centre at UCD shows 94 per cent of Irish adults surveyed would be willing to engage with connected mental health technology.
Study co-author Dr Louise Rooney, a postdoctoral research fellow at Arch, says AI-based systems with a research- and patient-centred focus could be beneficial.
"I don't think AI is the answer to everything or that it could fully replace therapy intervention, but I think there's a place for it at every juncture through the process of treatment, from checking in, to remote monitoring, even down to people getting information," she says.
The latest Mental Health Commission report shows waiting times for child and adolescent mental health services can reach 15 months. Rooney believes AI-based therapy could be particularly useful for young people who "respond very well to connected mental health technology". The anonymity of such platforms could also break down barriers for men, who are less likely to seek help than women.
Prof Paul Walsh from Cork Institute of Technology's department of computer science feels that AI-driven tools can "improve the accessibility to mental health services" but won't fully replace human therapy.
“For those who are vulnerable and need help late at night, there’s evidence to show [therapy chatbots using AI and NLP] can be an effective way of calming people,” says Walsh, who is currently researching how to build software and machine learning systems for people with cognitive disorders. “If someone’s worried or stressed and needs immediate feedback, it’s possible to give real-time response and support without visiting a therapist.”
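Such real-time feedback need not be sophisticated to be immediate. Purely as an illustration of the kind of tool Walsh describes, a rule-based check-in bot can be sketched in a few lines of Python; the keywords and replies below are assumptions made for the sketch, not the logic of any real product.

```python
# A purely illustrative, rule-based "check-in" bot. The keywords and replies
# are assumptions for this sketch, not the logic of any real product.
SUPPORTIVE_REPLIES = {
    "stressed": "That sounds hard. Would a short breathing exercise help right now?",
    "worried": "Thanks for telling me. Which worry feels biggest at the moment?",
    "alone": "You're not alone in this. Would you like options for talking to someone?",
}

DEFAULT_REPLY = "I'm here and listening. Can you tell me a bit more about how you feel?"

def respond(message: str) -> str:
    """Give an immediate supportive reply based on simple keyword matching."""
    lowered = message.lower()
    for keyword, reply in SUPPORTIVE_REPLIES.items():
        if keyword in lowered:
            return reply
    return DEFAULT_REPLY

print(respond("I'm feeling really stressed tonight"))
# -> That sounds hard. Would a short breathing exercise help right now?
```

A real system would sit far above this level of crudeness, but the sketch shows why such tools can answer instantly, at any hour, without a human on call.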
Dr Brendan Kelly, professor of psychiatry at Trinity College, says AI-based platforms such as chatbots can help people take control of their wellbeing in a positive manner.
"They can help people to take the first step into an arena that may be scary for them but I feel there will come a point that this is combined with, or replaced by, a real therapist," adds the consultant psychiatrist based at Tallaght Hospital.
Privacy concerns
AI-driven mental health therapy does not come without concerns, one of the most pressing being privacy.
“Clearly it’s a very important issue and people shouldn’t use something that compromises their privacy but it’s not a deal-breaker,” says Kelly. “There are ways to ensure privacy which must be done but [fears and challenges] shouldn’t sink the boat.”
Being completely transparent with users about data collection and storage is key, Rooney adds.
Whether AI can determine someone's capacity to consent to therapy is another concern raised by Rooney. However, she feels that forming "watertight legislation" for this technology and ensuring it is backed by research can help to overcome this and other potential pitfalls.
While most current tools in this field focus on mental wellbeing rather than severe illness, Walsh raises the risk of false negatives if AI is left to decide whether somebody has a chronic illness: a genuinely unwell person could be missed. To avoid this, it is important to keep a human in the loop.
“Many machine-learning systems are really hard to analyse to see how they make these judgements,” he adds. “We’re working on ways to try to make it more amenable to inspection.”
As almost anybody can engineer such a system, Walsh recommends avoiding anything without a "vast paper trail" of evidence.
“These will have to go through rigorous clinical trials,” he says. “We need policing and enforcement for anything making medical claims.”
Humans can become attached to a therapy chatbot, as was the case with Eliza, a chatbot developed at the Massachusetts Institute of Technology in the 1960s that mimicked a psychotherapist through simple pattern matching. However, Walsh doubts such tools will ever be as addictive or as great a threat as things like online gambling.
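Eliza's pull came from a simple trick: match a pattern in the user's words, then "reflect" the fragment back as a question. A minimal Python sketch of the idea follows; the patterns and replies are illustrative, not Weizenbaum's original script.

```python
import re

# Eliza-style reply generation: match a pattern in the user's words, then
# "reflect" pronouns so the fragment can be mirrored back as a question.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# (pattern, response template) pairs; illustrative, not Weizenbaum's script.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"mother|father|family", re.I), "Tell me more about your family."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a matched fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def eliza_reply(message: str) -> str:
    """Return a canned, mirrored response for the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please, go on."  # default prompt when nothing matches

print(eliza_reply("I feel anxious about my exams"))
# -> Why do you feel anxious about your exams?
```

Despite understanding nothing, Eliza's mirroring was enough for some users to confide in it, which is precisely the attachment Walsh refers to.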
While the sentiment that AI-based therapy will assist rather than replace human therapy is near universal, so is the view that it can have a great impact.
“Achieving optimum mental health involves being open to all different ingredients, mixing it up and making a cake. AI can be part of that,” says Rooney.
Walsh says that, if well regulated, AI can augment humans in treating people.
“I’m hopeful that benefits would be accentuated and the negatives or risks could be managed,” says Kelly. “The fact that it’s difficult and complex doesn’t mean we should shy away, just that we must think how best to capture the benefits of this technology.”
Brains behind the bots
Stanford psychologist and UCD PhD graduate Dr Alison Darcy is the brains behind Woebot: a chatbot combining artificial intelligence and cognitive behavioural therapy for mental health management.
"The goal is to make mental health radically accessible. Accessibility goes beyond the regular logistical things like trying to get an appointment," explains the Dublin native, who conducted a randomised controlled trial of Woebot before launching. "It also includes things like whether it can be meaningfully integrated into everyday life."
Darcy is clear that Woebot isn't a replacement for human therapy, nor will he attempt to diagnose. In the interest of privacy, all data collected is treated as if users were in a clinical study.
Not intended for severe mental illness, Woebot is clear about what he can do. If he detects someone in crisis, Woebot declares the situation is beyond his reach and provides helplines and a link to a clinically-proven suicide-prevention app.
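Woebot's actual crisis-detection logic is not public; purely as a hedged illustration, an escalation gate of this kind can be sketched as a keyword screen that interrupts the normal conversation and surfaces helplines. Every keyword and message below is a placeholder.

```python
from typing import Optional

# Hypothetical sketch of a crisis-escalation gate. The keywords and the
# helpline text are placeholders, not Woebot's actual (unpublished) logic.
CRISIS_KEYWORDS = {"suicide", "kill myself", "self-harm", "end it all"}

HELPLINE_MESSAGE = (
    "It sounds like you may be in crisis, and that is beyond what I can help "
    "with. Please contact a crisis helpline, or open <suicide-prevention app>."
)

def screen_message(message: str) -> Optional[str]:
    """Return an escalation message if the text matches a crisis keyword, else None."""
    lowered = message.lower()
    if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
        return HELPLINE_MESSAGE
    return None  # safe to continue the normal conversation
```

The design point is the short circuit: when the screen fires, the bot stops pretending to cope and hands the user to human services instead.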
Originally from Wexford, Máirín Reid has also harnessed the capabilities of AI in the mental health sphere through Cogniant. Founded in Singapore with business partner Neeraj Kothari, it links existing clinicians and patients to allow for non-intrusive patient monitoring between sessions.
It is currently being used by public health providers in Singapore with the aim of preventing relapses and improving efficiency for human therapists. As Cogniant is recommended to users by their therapists, decisions about a patient's capacity to consent are made by humans.
“Our on-boarding process is very clinically-driven,” says Reid. “We’re not there to replace, but to complement.”
While not intended for high-risk patients, Cogniant has an escalation process that connects any highly distressed user to their therapist and provides supports. There is also a strong emphasis on privacy and on being transparent from the outset.
"Clinicians are saying it drives efficiency and they can treat patients more effectively," says Reid. "Patients find it's non-intrusive and not judgmental in any form."