Facebook brings suicide prevention tools to Live and Messenger (techcrunch.com)
81 points by tooba on March 2, 2017 | hide | past | favorite | 52 comments


To those commenting that this somehow absolves friends from the responsibility of connection (and that's a bad thing), you're missing the point:

- Not everyone HAS friends who are willing or able to connect with them sufficiently. Saying "yeah, but they should" does NOT help the actual situation. These resources might.

- It is hard to know what to do with a (near) suicidal friend. Even if you truly wish to help, "connecting" or reaching out to them often doesn't help sufficiently. It can also be extremely draining on the helpful friend, who thinks they have a responsibility as a friend and good person to be there for someone in need. For normal gloom and doldrums, sure, that's what friends are for. For severe depression and suicidal thoughts, a friend can very quickly get in over their head, becoming the sole emotional anchor or lifeline for the depressed person. This can also lead to the helpful person's becoming burnt out, depressed, and upset at their perceived failure in helping this person they care about.

The more technology can do in identifying this negative emotional vortex and prompting people with the appropriate resources, the better.

If you're commenting about a friend's responsibility, have you ever been severely (suicidally) depressed, or the prime friend of someone who is? I have. I support Facebook's actions in this wholeheartedly.


It's nice to see them trying to do something about this. But I'd hold off on judging the merits of this approach until some reliable data comes in with regard to its efficacy.


Reminiscent of the automated religious booths in THX 1138: http://www.imdb.com/title/tt0066434/ https://en.wikipedia.org/wiki/THX_1138


That was my first thought too. It's a very small step from

   It sounds like you might hurt yourself, sending help.
to

   It sounds like you don't agree with the Party, comrade. We're 
   sending the happiness team to collect your family for retraining.


That seems like a pretty big step to me. Would you oppose 911 systems on the same grounds?

It's such a small step from: Your house is on fire, let's do something about that.

to

Forced fire safety re-education camps.


Both of the increments in my example depend on sentiment analysis of private communications. That tech opens a whole brave new world.


Both of my examples involve your neighbor using telecommunications to report you to the government.

That a technology could be used to do bad things is not a very good reason not to use it to do good things.

Would not providing automated suicide support prevent malicious use of sentiment analysis? Just because they use the same underlying technology doesn't mean one use causes the other use.


Yeah. I guess it's more a social issue: if a tech can be abused to support the state's goals, we are quickly finding that it will be abused.

Quick example: Alexa has already gotten its first subpoena for a presumed private voice conversation in someone's home.


It's impossible to argue preventing suicides is a bad thing. But it is inseparable from the fact that Facebook is now actively pursuing its ability to use the platform for social control.

I'm willing to grant them the benefit of the doubt that they just sort of bumbled into an interaction loop with downsides that might make facebook use a net-negative for the mental state of some users. They were trying to provide communication tools and it evolved into something that (at least correlates with) a decline in well being [0][1].

As more and more companies doing "social" cultivate userbases larger than they can moderate and start feeling justified in shaping their users' behavior en masse, I worry that we're slowly stumbling towards human-scale skinner boxes that no one fully understands or controls.

[0] http://journals.plos.org/plosone/article?id=10.1371/journal....

[1] jimmies' comment https://news.ycombinator.com/item?id=13772729 is a fairly typical example of someone figuring out facebook use is harming their psyche.


While I don't use Facebook, I've been on several other platforms and seen either threats, or actual suicides, by multiple contacts amongst them.

Facebook has, by some measures, 1.8 billion active users, roughly 1/4 of the global population. If there is a suicide every 40 seconds, there's a good chance that there's one every 2-3 minutes among Facebook's users. Strictly by standard mortality tables, there are tens of thousands of deaths amongst FB users per day.
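A quick back-of-the-envelope check of those figures (a rough sketch, assuming FB's user base is demographically typical of the world, which it almost certainly isn't exactly):

```python
# Rough estimates, mid-2010s figures
world_pop = 7.4e9              # approx. global population
fb_users = 1.8e9               # approx. monthly active Facebook users
suicide_interval_s = 40        # one suicide worldwide every ~40 seconds
world_deaths_per_year = 56e6   # approx. all-cause deaths per year

fb_share = fb_users / world_pop                    # ~0.24, i.e. roughly 1/4

# One suicide among FB users every ~164 seconds, i.e. every 2-3 minutes
fb_suicide_interval_s = suicide_interval_s / fb_share

# All-cause deaths among FB users: ~37,000 per day ("tens of thousands")
fb_deaths_per_day = world_deaths_per_year / 365 * fb_share

print(round(fb_suicide_interval_s), round(fb_deaths_per_day))
```

Both numbers in the comment above check out to within rounding.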

In the case of one friend -- deeply troubled, I knew -- there were two periods in which she'd threatened suicide. The first time, I managed to get through to local authorities in her area, though that was surprisingly difficult itself. The second time, it was simply too late. For another friend, I'd had no idea until after it was all over.

The feeling of absolute impotence behind a screen, and the realisation that it is impractical for a large systems provider to reach out directly, is numbing. Heroic interventions, particularly via online systems, are unlikely at best.

Far more critical would be to strengthen mental health, physical health, and social welfare (in the broadest sense) systems. There are so many people, in all parts of the world and all stations of life, in precarious straits, and often only the flimsiest of support, if that, available.

I appreciate Facebook's efforts, but a commitment to substantive, early, and ongoing support strikes me as vastly more meaningful and effective. This initiative could have some impact, but without a deeper commitment, strongly risks being seen as cosmetic and ultimately self-serving.

http://expandedramblings.com/index.php/resource-how-many-peo...

http://www.who.int/gho/mortality_burden_disease/mortality_ad...


To a depressing degree, most of our society's efforts against suicide work like this. The issue gets ignored altogether until someone is perceived to be in immediate danger, and then they either get support or (all too often) a three day psychiatric hold.

This process is terrible on so many levels. The psych hold in particular is both excessive (if someone isn't suicidal, it's a pointless kidnapping) and insufficient (if someone just attempted suicide, they're at low risk right after; if someone gets antidepressants, three days isn't long enough for them to kick in).

I suppose Facebook isn't in a position to help earlier on (at least, not without even more privacy invasion), but we definitely don't intervene in this issue at sensible times.


The kinds of interventions I'm looking for would be deep and social (see my other reply in this thread). The thing about healthcare, physical or mental, is that early and sustained efforts pay off vastly more than heroic acute efforts.

I'm increasingly convinced that the traditional Chinese concept of paying the doctor when you are well is an approach well worth considering, and possibly adopting.

It's also quite helpful to realise that not all ills can be healed. Almost all can be made far more tolerable, though, at the very least.


I largely agree with your comment and the sentiment behind it, especially when it comes to individuals, but this:

> I appreciate Facebook's efforts, but a commitment to substantive, early, and ongoing support strikes me as vastly more meaningful and effective.

troubles me. I can't imagine a way that Facebook would be able to make a substantive, early, and ongoing effort to strengthen mental health and social welfare systems within its own system without being overly invasive.

Could you give me an example of a deeper commitment Facebook could make?


Facebook, of and by itself, wouldn't.

But it would campaign for such services and capabilities within existing (or new) social institutions.

Universal single-payer healthcare, as a right, would be a tremendous step in that direction. And Facebook's capacity to sway political events is proven. Put that to good use.

Generally promoting the concepts of social responsibility, the common weal, full and fair corporate taxation, equality of opportunity, and more, are additional components.


I see! I was trying to think from a technical perspective of them taking advantage of the amount of data they have and doing something -- what you're describing makes a lot more sense. Thank you!


I don't know about the claim "Facebook is in a unique position to help prevent people from doing harm to themselves." Oh, Big Brother is our brother now?

If harm means physical harm, maybe -- or maybe not: apparently some people treat facebook live as a popularity contest and start streaming their suicides on it [OP, and Google - 0]. To understand why fb might be the reason people did that in the first place, ask: if live streaming functionality were all that people wanted, why wasn't streaming suicide attempts on youtube a problem (or as big of one)?

But on a more intricate level, harm also means slowly sinking and being depressed, and to me, Facebook is a drug that does exactly that.

There are studies which concluded that people are actually happier without fb [1]. Personally, I found that to be true: I feel depressed browsing the meaningless statuses of people bragging about trivial things or sharing political news. I find fb isn't even a platform where I can share "what's on my mind" (like it always prompts) anymore. Everything is tied to my real name with no other option. I can't share stories from work because co-workers on facebook will see. I can't share anything about my relationship because our mutual friends will see, and my girlfriend gets pissed off. I can't share my good news because people will think I am a dick who wants to brag. I can't share my bad news because people will get worried more than me. I can't share hobbies because my friends don't have the same hobbies and they don't care, so no reaction, so I get depressed because no one cares.

So all I can share is shitty vacation pictures with me smiling like an idiot, and inconsequential news. That's the kind of news that pleases everyone: the lowest common denominator of me and everyone on fb. Fuck that. Also, I found out that my behavior on fb is extremely similar to an addiction behavior. That includes finding it not useful but keep coming back, repeatedly checking it for new stuff, coming back after deactivating.

I used to think Facebook Messenger was essential -- until I found myself not reading the messages and not caring what people say in the thousands of groups I'm in. I see that in my friends too: they leave groups without saying a word, which means FU, don't add me to stupid group chats again. How about events? People have stopped responding to event invites as well.

So I decided enough was enough -- with the numerous ethical problems of facebook [2] on top, fb does more harm than good to me. In the last couple of weeks, I deleted my fb account. This time, I didn't announce to people that I would be gone; I just silently deleted it. I'd had enough, and I don't care who will miss me, thank you facebook. That promise is a lie anyway: most of them won't miss me, and if they or I do, we'd already have each other's contact information.

At the beginning of 2017, I also swapped my top-of-the-line smartphone for a dumb phone, with little to no distraction. Data caps are no longer a problem, privacy is no longer a question (I just assume everyone hears), features are no longer a problem, losing sleep is no longer a problem, apps definitely are no longer a problem, and I no longer question or have any problem when someone defriends me. I check my emails when I want. I come to people. I no longer have the urge to check my phone for the facebook feed in the middle of the night. I lug around a big-ass mirrorless camera that takes pictures not for likes, and navigate by my instincts. I feel great. I feel productive. I feel creative. I feel free. I started commenting on websites where people don't know and don't care who the fuck I am. But they have the same hobbies and interests, so I feel connected, I feel togetherness.

There is definitely more when I have less of facebook.

0: https://www.google.com/search?q=girl+live+streams+sucide+on+...

1: http://journals.plos.org/plosone/article?id=10.1371/journal....

2: https://stallman.org/facebook.html


I've found Facebook almost comically easy to avoid since day one. The only thing that never changes is how intensely people on Facebook want me to join them, but their arguments never improve. The only friends I know still on FB are the ones who only ever used it to keep in touch with people halfway around the world, and nothing else. Even then, most of them have migrated to other services.

Facebook is a crazy invasion of privacy, but you literally have to invite them in.


> Facebook is a crazy invasion of privacy, but you literally have to invite them in.

Yep. I have my Facebook privacy settings cranked up to max, but I'm honestly not sure why I bother. My actual policy is "this is a public space, only put things here I'm happy to publicize", so there's really no privacy to invade.


That's the sane way to do it, definitely.


Facebook has both good and bad parts. Unfortunately, the good part of FB (which helps people connect and stay in touch with each other) doesn't make any money; the bad part, which incites jealousy, consumerism and depression, is very lucrative.

There is no better company to illustrate the problems of our current economic system than Facebook.


> Also, I found out that my behavior on fb is extremely similar to an addiction behavior. That includes finding it not useful but keep coming back, repeatedly checking it for new stuff, coming back after deactivating.

It's also just like a drug in that it's at its most exciting and positive when you start using it. Initially there's a flood of attention from everyone around you when you sign up, and it's a few weeks to months before you're "settled in" and it becomes what you're describing. Glad I've been gone for several years now. I see my fiancee use it occasionally and it looks even more awful than I remember, constant scrolling and videos autoplaying. No thanks.


I can understand the 'drug' effect, but I was interested that I never had anything like withdrawal symptoms when I ditched the thing. I found FB interesting and fun, then I found it less fun but checked it out of habit, then I realized that was silly and abandoned it outside of direct interactions.

I think maybe I got lucky and synced up with the issues you describe. Non-chronological feeds, embedded ads, and autoplay videos are simply unpleasant, so if I was addicted perhaps they took away the stuff that would have offered a 'hit'.

In any event, I know an awful lot of people who report the same "I used to use FB" outlook. New users are born every minute I guess, but it's enough people to make me wonder about FB's sustainability.


"why streaming suicide attempts on youtube wasn't a (or, as big of a) problem?"

You would have less of a captive audience on youtube. Do this on facebook and it just gets thrown in front of an audience of people you know.


> In Facebook CEO Mark Zuckerberg’s recent manifesto, he wrote about how Facebook is in a unique position to help prevent people from doing harm to themselves.

This makes you wonder how often it happens that a post on facebook (say, school kids bullying a class mate) is the final straw for someone who is already mentally suffering.

In other words, FB is probably quite responsible for many suicide attempts by people who are at risk of committing suicide.


I hate Facebook, but being hasty to place blame and responsibility on them after a suicide is short-sighted. Unless it was related to the emotional manipulation experiment they did a while back.

https://www.forbes.com/sites/gregorymcneal/2014/06/28/facebo...


"Responsibility" may be too strong a word but I guess it is fair to say that technically they've played a role in many cases. And because of that it rubs me the wrong way when they say they are in a unique position to "help" people. It's rather an attempt at saving face.


It was AIM before Facebook, and playgrounds before AIM. Shitty people are quite responsible for many suicide attempts by people who are prone to committing suicide.


That's a little reductive. Another communications medium would take Facebook's place if it weren't the platform for this bullying.


No doubt. I just find Mr. Zuckerberg's wording a bit odd. Clearly they fear that someone might blame them someday if they fail to take action. Instead of saying "Look, we have a problem and we're working on it" he goes "Look FB is great, we help people!"

Of course, this is typical PR speak but it leaves a sour taste when it comes to such a serious topic, imho.


What a sticky situation. I applaud the groups concerned with well-being for being willing to partner with Facebook. I also want to see the tools as a positive, for the good they can do. Deploying such tools is a risk on Facebook's part, and I think it's the good type to take. Harm prevention is probably a consensus point to work on; I see it as a good.

My imagination did give me a little pause though, that if somebody was going through some kind of paranoia state and suddenly the Facebook AI scoots them over to one of the crisis providers - you know, kind of like technology being indistinguishable from magic/witchcraft when using AI - that could be a compounded situation. Granted, a drastic hypothetical, but perhaps not an entirely useless one.


This is probably a moderately useful suicide and self harm prevention measure.

We know that of deaths by suicide in people under 18, 1 in 7 posted messages to social media before they died: https://twitter.com/ProfLAppleby/status/837053501243023364

We have small amounts of weak evidence, mostly from interviews of people who survived suicide attempts and people who self harm who say that this kind of intervention is helpful.

See also signs in multistory carparks and different packaging (and reduced pack sizes) for paracetamol.

I do a lot of searching for suicide related stuff, and I already see quite a lot of similar advice. I guess I'm about to see a lot more of it.

https://twitter.com/actioncookbook/status/834439563032555521


But I think the thing that makes these kinds of interventions impactful is that they come from one of your friends.

In the intervention that Facebook proposes, the victim only sees a popup from Facebook, not a friend. The person who generated that popup isn't even named. It's very impersonal.

If I were suicidal, the only thing this popup would tell me is that I made one of my viewers uncomfortable, and that they weren't willing to connect in person. I imagine that would only serve to amplify my shame and make me feel even worse.


This suddenly reminds me of the fake bomb alert last year: http://www.theverge.com/2016/12/27/14088982/fake-news-safety.... I can't tell whether this will produce more false negatives or false positives. It might be worse. How do we prevent the former? I don't think there is a way. It's like kids calling 911 for fun -- that used to be a thing in my country.


And false positives for suicide are comparably bad to ones for bombs. No SWAT teams perhaps, but it can easily get someone held on a closed psych ward for a week (with lasting legal consequences for them). Someone is going to sue the hell out of FB when they get held on a false positive.


This article leaves me with a very sour taste in my mouth. I echo the sentiment that others have expressed that pawning off the work of connecting with the person in need is a bad thing. I also don't think that just because Facebook is in a position to do something about it, they should. It seems like far too personal a matter for them to be sticking their hands into.


Having participated in a couple teen suicide autopsies, I submit that, at least in the meetings I was in, there was broad support for recruiting social networks to help fight both cyber-bullying and detect suicidal tendencies. If anything, let's screw up in the other direction for a while and see how the two approaches compare.


This really strikes me as wrong. I can't quite put my finger on why.

It's like if I'm watching my friend go through a hard time, this removes my responsibility to connect with them? I can just ask robots at Facebook to do the dirty emotional work for me. How messed up is that?

We're outsourcing emotional labor to robots now? Like, if I get uncomfortable supporting my friend, I can click a button to have robots do it instead?

An actual handcrafted message from someone saying "hey, it sounds like you're hurting. what do you need?" would mean far more to me than a popup from Facebook that says "An Unnamed Friend has Activated Crisis Response Protocol! Deploying Support Resources ..."


You're describing the world in terms of "should/ought" rather than "is". Yes, it would be nice to live in a world where every person has friends who are capable of connecting with and supporting them, but that is not the present world. The point of tech is scalability - you can implement something which can rapidly help _everyone_, especially those for whom other systems have failed.

Additionally, you're ignoring how hard it can be to adequately support a suicidal friend. Often times people want to help, but feel out of their depth and don't know how, and many people are unaware of the availability of resources such as the Crisis Text Line or Suicide Prevention hotline (which is what FB connects you with).

As for the "responsibility to connect with them", at a certain point your responsibility as a friend is to get someone professional help and intervention. Trying to keep a depressed friend afloat as a layperson is enormously taxing, and often unsuccessful. Yet people think they have a "responsibility" to keep trying, as a good friend, and can easily burn themselves out, with disastrous consequences for both themselves and their depressed friend.

I welcome any extra support technology and AI can provide, at least in terms of connecting people with the correct resources.


How long before facebook gets sued over a suicide it failed to prevent?


Here's an IP law professor's take [0] in response to two suicides streamed on FB Live in January.

tl;dr: "I don't believe current law requires FB to take any additional steps."

[0] http://www.rawstory.com/2017/02/can-facebook-be-sued-for-liv...


Came here to say this. They had better be very careful going down this road. If they offer a tool for this purpose, and somebody can claim the help it provided was inadequate, my guess is they could be in trouble.

I built an old-style chatbot (think Ask Eliza) and put in a simple recognition pattern: if certain key phrases are entered, it takes the user down a particular decision tree of questions and responses. It's loosely based on CBT, only as a choice of reasonable conversation/interaction. I would never claim it was a suicide prevention tool or a depression therapy tool, even if it was pretty good at it.
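The keyword-routing idea I'm describing can be sketched in a few lines (all phrases, responses, and names here are hypothetical stand-ins, not the actual bot):

```python
# Phrases that divert the conversation into a dedicated subtree.
# These are illustrative examples only.
CRISIS_PHRASES = ("want to die", "kill myself", "end it all")

def route(message: str) -> str:
    """Route a user message: crisis phrases enter a dedicated subtree,
    everything else falls back to generic Eliza-style reflection."""
    text = message.lower()
    if any(phrase in text for phrase in CRISIS_PHRASES):
        # First node of the special question/response decision tree
        return "That sounds really heavy. What has been happening lately?"
    # Default reflective prompt
    return "Tell me more about that."

print(route("some days I just want to end it all"))
```

The real trick, of course, is everything downstream of that first branch; the matching itself is trivial.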


How long before Facebook starts grading people and choosing whom to prevent from suicide and whom to push towards it?


They could always sell that data to companies that are interested in not employing persons with mental health issues.


Something like Cambridge Analytica could use it to do terrible things.


I can think of quite a few awful things data like this could be very useful for. I feel just a little gross that I had these ideas.


Thought-crimes coming soon to a facebook near you.


[flagged]


This is a pretty cynical snark and doesn't contribute anything to the discussion.


Their revenue is increasing while saving lives? How dare they.


Did Facebook even stop to think and realize that they are the reason those people are depressed and wish to die?


Fix the inequity in society that is the cause of most depression? Nahhh, let's just virtue signal.


Specifically what role do you think Facebook should play in reducing inequality?


Reorganize as a worker cooperative.



