
> Most people who have worked with a professional therapist understand intuitively why the only helpful feedback from an LLM to someone who needs professional help is: get professional help.

I'm not a therapist, but as I understand it most therapy isn't about suicide, and doesn't carry suicide risk. Most therapy is talking through problems, and helping the patient rewrite old memories and old beliefs using more helpful cognitive frames. (Well, arguably most clinical work is convincing people that it'll be ok to talk about their problems in the first place. Once you're past that point, the rest is easy.)

If it's prompted well, ChatGPT can be quite good at all of this. It's helpful having a tool right there, free, and with no limits on conversation length. And some people find it much easier to trust a chatbot with their problems than to explain them to a therapist. The chatbot - after all - won't judge them.
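For what it's worth, "prompted well" can be as simple as a system prompt that sets the frame before the conversation starts. A minimal sketch with the OpenAI Python client - the prompt wording and model name are my own assumptions, not a vetted clinical setup:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical system prompt; the wording is illustrative only,
    # not a clinically reviewed protocol.
    SYSTEM_PROMPT = (
        "You are a supportive listener using CBT-style techniques. "
        "Ask open-ended questions, gently challenge unhelpful beliefs, "
        "and if the user mentions self-harm, urge them to contact a "
        "crisis line or a licensed professional immediately."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "I keep replaying an old mistake and can't let it go."},
        ],
    )
    print(response.choices[0].message.content)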

My heart goes out to that boy and his family. But we also have no idea how many lives have been saved by ChatGPT helping people in need. The number is almost certainly more than one. Banning ChatGPT from having therapy conversations entirely seems way too heavy-handed to me.



I feel like this raises another question: if there are proven approaches and well-established practices among professionals, how good would ChatGPT be in that profession? After all, ChatGPT has a vast knowledge base and has probably absorbed a good number of psychology textbooks. Then again, actually performing the profession probably takes skill and experience ChatGPT can't learn.


I think a well-trained LLM could be amazing at being a therapist. But general-purpose LLMs like ChatGPT have a problem: they're trained to be far too user-led. They don't challenge you enough. Or steer conversations appropriately.

I think there's a huge opportunity if someone could get hold of really top-tier therapy conversations and train a specialised LLM on them. No idea how you'd get those transcripts, but that would be a wonderfully valuable thing to make if you could pull it off.
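For the curious: OpenAI's fine-tuning endpoint accepts chat-formatted JSONL, so if you somehow had consented, anonymized transcripts, the training data might look roughly like this (the example turns here are invented, not real transcript data):

    import json

    # Hypothetical, fully consented and anonymized transcript turns.
    examples = [
        {
            "messages": [
                {"role": "system", "content": "You are a therapist who gently challenges unhelpful beliefs."},
                {"role": "user", "content": "Nothing I do ever works out."},
                {"role": "assistant", "content": "'Ever' is a strong word. Can you think of one recent thing, however small, that did work out?"},
            ]
        },
    ]

    # One JSON object per line, as the fine-tuning API expects.
    with open("therapy_finetune.jsonl", "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")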


> They're trained to be far too user-led. They don't challenge you enough.

An anecdote here: I recently had a conversation with Claude that could be considered therapy or at least therapy-adjacent. To Anthropic's credit, Claude challenged me to take action (in the right direction), not just wallow in my regrets. Still, it may be true that general-purpose LLMs don't do this consistently enough.


> No idea how you’d get those transcripts

You wouldn't. What you're describing as a wonderfully valuable thing would be a monstrous violation of patient confidentiality. I actually can't believe you're so positive about this idea; I suspect you might be trolling.


I'm serious. You would have to do it with the patient's consent of course. And of course anonymize any transcripts you use - changing names and whatnot.
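Even the "changing names and whatnot" part is partly automatable. A rough first pass with spaCy's named-entity recognizer might look like this - you'd still want a human to review every transcript before using it:

    import spacy

    # Small English model; installed separately via
    # `python -m spacy download en_core_web_sm`.
    nlp = spacy.load("en_core_web_sm")

    def redact(text: str) -> str:
        """Replace person names, places, and organizations with placeholders."""
        doc = nlp(text)
        out = text
        # Replace from the end of the string so earlier character
        # offsets stay valid as we edit.
        for ent in reversed(doc.ents):
            if ent.label_ in {"PERSON", "GPE", "ORG"}:
                out = out[:ent.start_char] + f"[{ent.label_}]" + out[ent.end_char:]
        return out

    print(redact("My manager Sarah at Initech keeps undermining me."))
    # Expected, roughly: "My manager [PERSON] at [ORG] keeps undermining me."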

Honestly I suspect many people would be willing to have their therapy sessions used to help others in similar situations.


Knowing the theory is a small part of it. Dealing with irrational patients is the main part. For example, you could go to therapy and be successful. Five years later something could happen and you face a recurrence of the issue. It is very difficult to just apply the theory that you already know again. You're probably being irrational. A therapist prodding you in the right direction and encouraging you in the right way is just as important as the theory.


It's imperative that we as a society make decisions based on what we know to be true, rather than what some think might be true.


“If it's prompted well”

What the fuck does this even mean? How do you test for or ensure it? Because based on the actual outcomes, ChatGPT is 0 for 1 at preventing suicides (it went as far as to outright encourage one).


If you're going to make the sample size one and use the most egregious example, you can make pretty much anything that has ever been born or built look terrible. Given there are millions of people using ChatGPT and others for therapy every week, maybe even every day, citing a record of 0 for 1 is pretty ridiculous.

To be clear, I'm not defending this particular case. ChatGPT clearly messed up badly.



