
This thread reads like an advertisement for ChatGPT Health.

I came to share a blog post I just published, titled "ChatGPT Health is a Marketplace, Guess Who is the Product?"

OpenAI is building ChatGPT Health as a healthcare marketplace where providers and insurers can reach users with detailed health profiles, powered by a partner whose primary clients are insurance companies. Despite the privacy reassurances, your health data sits outside HIPAA protection, in the hands of a company facing massive financial pressure to monetize everything it can.

https://consciousdigital.org/chatgpt-health-is-a-marketplace...



> This thread reads like an advertisement for ChatGPT Health.

This thread has a theme I see a lot in ChatGPT users: They're highly skeptical of the answers other people get from ChatGPT, but when they use it for themselves they believe the output is correct and helpful.

I've written before on HN about my friend who decided to take his health into his own hands because he trusted ChatGPT more than his doctors. By the end he was on so many supplements and "protocols" that he was doing enormous damage to his liver and immune system.

The more he conversed with ChatGPT, the better he got at getting it to agree with him. When it started to disagree or advise caution, he'd blame it on overly sensitive guardrails, delete the conversation, and start over with an adjusted prompt. He'd repeat this until he had something to copy and paste to us to "prove" that he was on the right track.

As a broader anecdote, I'm seeing "I thought I had ADHD and ChatGPT agrees!" at an alarming rate in a couple of communities I'm in with a lot of younger people. This, combined with the TikTok trend of diagnosing everything as a symptom of ADHD, is becoming really worrying. In some cohorts, it's a rarity for someone to believe they don't have ADHD. There are also a lot of complaints from people who are angry their GP wouldn't just write a prescription for Adderall, along with tips on shopping around for doctors who won't ask too many questions before writing prescriptions.


> I'm seeing "I thought I had ADHD and ChatGPT agrees!" at an alarming rate in a couple communities I'm in with a lot of younger people

This may be caused by ChatGPT's response patterns, but it doesn't necessarily mean there is an increase in false (self-)diagnoses. The question is: what is alarming about the increasing rate of diagnoses?

There has been an increase in positive diagnoses over the last decades, which has been partially attributed to adult diagnoses (which weren't common until after the 1990s) and to the fact that non-male patients often remained undiagnosed because of a stereotypical view of ADHD.

If the diagnosis helps, then it's a good thing! If it turns out that 10% of the population are ADHDers, then let's see how we can change our environment to reflect that fact. In many cases, meds aren't needed as much when public spaces provide the necessary facilities to retreat for a few minutes, wear headphones, chew gum, or fidget.

The story of your friend sounds very bad, and I share your point here completely. But concerning ADHD, I still don't see what's bad about the current wave of self-diagnoses. If people buy meds illegally, use ChatGPT as a therapist, etc., THAT is a problem. But identifying with ADHD is not a problem in itself (same for Autism, Depression, Anxiety and so on).

ADHD may even be a reinforcing factor in how readily an LLM user is convinced by the novelty of the tool, but that would have to be empirically evaluated. If it were so, this could even contribute to a better rate of diagnoses, without ChatGPT's capabilities in this field contributing much to the effect. Many ADHDers suffer from failing at certain aspects of daily life over and over, and advice that helps others only makes them feel worse because it doesn't work for them (e.g. building habits or rewarding oneself for reaching a milestone can be much more difficult for ADHDers than for non-ADHDers). I'm just guessing here, and this doesn't hold for all ADHDers, but: whenever a new and possibly fun tool comes along that feels like an improvement, there can be a spark of enthusiasm that may lead to increased trust. This usually decreases after a while, and I suspect that once LLMs have been around a bit longer, their popularity in this field may also decrease.


I don't see why they shouldn't be sued for misleading people with such products.

Great write-up. I'd even double down on this statement: "You can opt in to chat history privacy". This is really "You can opt in to chat history privacy on a chat-by-chat basis, and there is no way to set a default opt-out for new chats".


This. It’s the same play with their browser. They are building the most comprehensive data profile on their users and people are paying them to do it.


Is this any worse than Google? Seems like the same business model.


There are lots of companies that do this. Doesn't make it right.

The real "evil" here is that companies like Meta, Google, and now OpenAI sell people a product or service that the customer thinks is the full transaction. I search with Google, they show me ads - that's the transaction. I pay for Chatgpt, it helps me understand XYZ - that's the transaction.

But it isn't. You give them your data and they sell it - that's the transaction. And that obscurity is not ethical in my opinion.


> You give them your data and they sell it - that's the transaction

I think that's the wrong framing. Let's get real: They're pimping you out. Google and Meta are population-scale fully-automated digital pimping operations.

They're putting everyone's ass on the RTB street and in return you get this nice handbag--err, email account/YouTube video/Insta feed. They use their bitches' data to run an extremely sophisticated matchmaking service, ensuring the advertiser Johns always get to (mind)fuck the bitches they think are the hottest.

What's even more concerning about OpenAI in particular is they're poised to be the biggest, baddest, most exploitative pimp in world history. Instead of merely making their hoes turn tricks to get access to software and information, they'll charge a premium to Johns to exert an influence on the bitches and groom them to believe whatever the richest John wants.

Goodbye democracy, hello pimp-ocracy. RTB pimping is already a critical national security threat. Now AI grooming is a looming self-governance catastrophe.


I think you just wrote a treatment for the next HBO Max Sunday drama.


And it's not only your data, which makes it much worse.

"You are the product" is a good catchphrase to make people understand. But actually when you search or interact with LLMs, you provide not only primary data about yourself but also about other people by searching for them in connection with specific search terms, by using these services from your friend's house which connects you to their IP-Address, by uploading photos of other people etc.

"You are the product and you come with batteries (your friends)."


Does Google have your medical records? It doesn't have mine.


They tried to at one point with Google Health. They are still somewhat trying to get that information through the Fitbit acquisition.


People email about their medical issues and google for medical help using Gmail/Google Search. So yes, Google has people's medical records.


If you hear me talking to someone about needing to pick up some flu medicine after work do you have my medical records?


No, but if I hear you telling someone you have the flu and are picking up flu medicine after work, then I have a portion of your medical records. Why is it hard for people on HN to believe that normal people do not protect their medical data, and email about it or search Google for their conditions? People in the "real world" hook up smart TVs to the internet and don't realize they are being tracked. They use cars with smart features that let them be tracked. They have apps on their phone that track their sentiments, purchases, and health issues... All we are seeing here is people getting access to smart technology for their health issues in a way that might lower their healthcare costs. If you are an American, you can appreciate ANY effort in that direction.


Maybe stop to consider that knowing a few scattered facts and having your complete medical records are not the same thing, Hemingway.


how do you know they don't?


Since when is Google the model to emulate?


Depends on your goals. If you are starting a business and you see a company surpass the market cap of Apple, again, then you might view their business model as successful. If you are a privacy advocate then you will hate their model.


Well you said "is this any _worse_" (emphasis mine) and I could only assume you meant ethically worse. At which point the answer is kind of obvious because Google hasn't proven to be the most ethical company w.r.t. user data (and lots of other things).


since always


May your piece stay at the top of this comment section.


I get that impression too - but also it's HN and enthusiastic early adoption is unsurprising.

My concern, and the reason I would not use it myself, is the all-too-frequent skirting of externalities. For every person who says "I can think for myself and therefore understand if GPT is lying to me," there are ten others who will take it as gospel.

The worry I have isn't that people are misled - that happens all the time, especially in alternative and contrarian circles (anti-vaxx, homeopathy, etc.) - it's the impact on medical professionals who are already overworked and will now have to deal with people's commitment to an LLM-based diagnosis.

The patient who blindly trusts what GPT says is going to be the patient who argues tooth and nail with their doctor about GPT being an expert, because they're not power users who understand the technical underpinnings of an LLM.

Of course, my point completely ignores the disruption angle: tech and insurance working hand in hand to undercut regulation, before it eventually pulls the rug.



