Thanks for this! It means a lot. It can be a little discouraging to hear criticisms when you've put a lot of thought and time into something that you hope can help people.
But I fully understand and appreciate the comments made by other posters.
I have put it back online with a disclaimer that must be acknowledged, and pointed out this is a research app not a substitute for therapy.
I hope that does enough to mitigate some of the risks, whilst allowing me to continue to work at making this a safer and more valuable resource. Whenever that may be, or not :)
I've taken it down. It was meant as a quick demo of how one might leverage GPT4 in this area but there's a risk posting here people might think it's a serious product.
From your comment however, you don't rule out applying AI in this field entirely?
Not at all! In fact, after my own personal tinkering I suspect GPT-4 is often better at diagnosis for both mental and physical health issues than the average doctor for common ailments.
The obvious issue is that it can confidently output the wrong answer. This will be less of a problem though as model accuracy continues to improve.
Really, the bigger problems are the ones that doctors and therapists already have: patients are not always reliable narrators and may mis-describe symptoms, insert their own prejudices, and outright conceal important information. In particular, many patients may not have the reading and writing skills needed to respond properly to even the most accurate of prompts.
This is now the muddy domain of "sorry, you just need a human available". There's too much risk online, too many edge cases to accommodate, and since the tech itself actually provides a _convenience_ over doctor visits, you could be causing more harm than you might think if people decide to ignore their doctors for the "cheaper" option. This is particularly relevant to me as an American, where health care is a mess, and mental health care is far worse.
A better approach might be to assume that this type of tool is available for patients who are currently receiving some kind of treatment; whether that is a doctor's appointment or in-patient care at a behavioral health facility. There's probably a chance to reduce friction and maybe improve patient outcomes if such a tool could be provided as an early survey, perhaps. Or - as you have done here, a way to teach some coping skills that are highly individualized to each person.
Really though yeah, I think a qualified professional should be in the room and making sure things go well.
Thanks so much for your thoughts. I tried my app with a friend and it took a good amount of digging deeper to actually get to their root thoughts about their difficult event.
So to be successful with such a tool, a good level of self-awareness is required.
Yes, a tool as an adjunct to regular therapy is a good opportunity. I actually discussed this with my own therapist and he thought it was a good idea; with the NHS being so overwhelmed here in the UK, having a first-line, resource-cheap option is appealing.
I suppose the thing to figure out is how this is best done in a safe and useful manner.
No problem! It's an area I've recently become quite interested in, but I don't really have the background necessary to approach it.
I am also quite biased against any system that reduces the accountability of the care provider - and so much of modern application development is about being able to scale essentially everything _but_ accountability.
I like the idea of "ThoughtCoach" because you are framing it as empowering the user to improve their own abilities to manage their mental health. In this regard, it's kind of in the genre of mindfulness meditation (and the various apps/integrations that have shown up to help people with that). You could go with a gamified option like Noom does.
This requires you to scope out a progression for the user to go through - one that is, ideally, informed by modern psychology and statistically shown to work before being implemented. It also requires you to significantly narrow the LLM's possible outputs/modes at each level - which alleviates many issues with overconfident outputs and unreliable patients (you can even force the user to choose from a list of words, rather than type anything at all).
I haven't checked this space out enough yet but it's rapidly evolving. A lot of chatGPT-based start-ups are in this learning-with-a-smart-tutor-available regime. I think it could be very powerful but a lot of clever engineering is still required. On the other hand, maybe the limits there prevent it from being nearly as useful as what you're imagining.
Food for thought.
edit: I guess my final thought is that, from personal experience, a good part of why therapy can be successful is that some amount of positive human interaction alone can be enough to improve one's general outlook. The context of "using an app" requires initiative and perseverance. The comfort of consistent human contact can inspire that, but apps just feel like "thumbing through your phone" - and reducing screen time in general is good for mental health. So the app probably shouldn't be gamified, because the only targets you can set in an app are LLM-based and will require the user to type or tap. Anything other than that is just what's already out there - Apple's Screen Time, step tracking, calorie counters, mindfulness guiding. Adding an LLM doesn't obviously help there.
I have taken it down, thanks for the advice. Definitely not worth taking risks. As someone who has struggled with mental health this is something that I built mainly for myself. I will keep it as a personal assistant.
It can be hard to find a therapist who you gel with. I've been through a few in my time. The real work is done by oneself ultimately. Be it with CBT homework or mindfulness between sessions. Obviously it's very individualised though.
David Burns (of CBT fame) is currently involved in an app which incorporates AI, so I'm interested to see how they've dealt with the sensitive issues.
I'm curious to see how it plays out with trained and ethically aware psychologists. That said, just because someone has a PhD or MD or DO doesn't mean they're not a charlatan, or that they're definitely a good person.
Personally, I had some courses on computer architectures, and those covered the theory of how CPUs are constructed, how they run programs, etc. I decided to write an NES emulator. So:
- Find Wikipedia articles, and learn that it used a variant of the MOS Technology 6502, which was used in a lot of computers in the 80s.
- Find some digitized assembly programming manuals from the time (I think the one I used was distributed with the Commodore 64, and ended up having several typos introduced by OCR).
- Write a tool to recognize, decode, and print out an operation when you feed it a little data.
- You basically need to set up a loop of fetching instructions, interpreting them, then doing what they say. An actual CPU runs in a similar loop, and it generally doesn't stop until power is removed.
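The fetch-decode-execute loop described above can be sketched in a few lines. This is a toy, not a complete 6502 core: the class name, the handful of opcodes, and the use of BRK as a halt signal are my own simplifications (a real 6502 implements 56 instructions across many addressing modes, plus status flags the sketch ignores).

```python
class CPU6502:
    """Toy interpreter for a tiny subset of the MOS 6502."""

    def __init__(self, memory):
        self.mem = memory      # 64 KiB address space as a bytearray
        self.pc = 0x0000       # program counter
        self.a = 0             # accumulator register
        self.running = True

    def fetch(self):
        """Read the byte at PC and advance PC (wrapping at 16 bits)."""
        byte = self.mem[self.pc]
        self.pc = (self.pc + 1) & 0xFFFF
        return byte

    def step(self):
        """Fetch one opcode, decode it, and execute it."""
        opcode = self.fetch()
        if opcode == 0xA9:        # LDA #imm: load next byte into A
            self.a = self.fetch()
        elif opcode == 0x69:      # ADC #imm (carry and flags ignored here)
            self.a = (self.a + self.fetch()) & 0xFF
        elif opcode == 0x00:      # BRK: repurposed to halt this toy loop
            self.running = False
        else:
            raise NotImplementedError(f"opcode {opcode:#04x}")

    def run(self):
        # A real CPU spins in this loop until power is removed.
        while self.running:
            self.step()

# Program at address 0: LDA #$05; ADC #$03; BRK
mem = bytearray(0x10000)
mem[0:5] = bytes([0xA9, 0x05, 0x69, 0x03, 0x00])
cpu = CPU6502(mem)
cpu.run()
print(hex(cpu.a))  # 0x8
```

Each elif branch doubles as the "recognize, decode, and print" tool from the earlier step: swap the execution bodies for print statements and you have a disassembler instead of an interpreter.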
I think that after the classes I took, I read a lot of what other emulator writers said. This article is a basic look at the structure of an emulator, the theory behind them, and some different designs: http://fms.komkon.org/EMUL8/HOWTO.html
One of my friends at uni wrote a Z80 emulator/assembler in PDP-10 assembler as a hobby project - and he was studying chemistry, not CS...
I don't understand why a basic understanding of CPU architectures isn't a CS fundamental everywhere.
Even if you have no interest in emulating a CPU or an OS, you really do need to know what registers are, how caches work, what interrupts do, and how basic IO happens.
At the very least it's a practical demonstration of one particular kind of VM, and - if you want to - you can generalise from that to VMs of your own design.
For web apps, not understanding these things can get expensive. Cycles, even cloud cycles, aren't free, and if you take zero interest in optimisation and efficiency you're literally throwing money away.
Agreed. It's important to understand how the hardware works, at least at the theoretical level. If you don't understand what the machine is doing, it's hard to say that you really understand how your program works.