Same here. Until recently I would get maybe 1-2 spams a month, and I just got 30 in the span of a few days.
They’re the very obvious, very obnoxious kind of spam, and Gmail still correctly sends them to the junk bin, so I wonder if they were shadowbanned before and Google simply decided to make the process more explicit (which I don’t hate on principle).
Either that or my address was scrapped from somewhere by a spam bot and the timing is coincidental.
Evangelists keep insisting that healthcare is one of the things that AI will revolutionize in the coming years, but I just don't get it. To me it's not even clear what they mean by "AI" in this context (and I'm not convinced it's clear to them either).
If they mean "machine learning", then sure there are application in cancer detection and the like, but development there has been moving at a steady pace for decades and has nothing to do with the current hype wave of GenAI, so there's no reason to assume it's suddenly going to go exponential. I used to work in that field and I'm confident it's not going to change overnight: progress there is slow not because of the models, but because data is sparse and noisy, labels are even sparser and noisier, deployment procedures are rigid and legal compliance is a nightmare.
If they mean "generative AI", then how is that supposed to work exactly? Asking LLMs for medical diagnosis is no better than asking "the Internet at large". They only return the most statistically likely output given their training corpus (that corpus being the Internet as a whole), so it's more likely your diagnosis will be based on a random Reddit comment that the LLMs has ingested somewhere, than an actual medical paper.
The only plausible applications I can think of are tasks such as summarizing papers, acting as augmented search engines for datasets and papers, or maybe automating some menial administrative tasks. Useful, for sure, but not revolutionary.
The most statistically likely output given your diligently described symtoms could still be useful. The prohibitive cost in healthcare in general is likely your time with your doctor. If you could "consult" with a dumb LLM beforehand and give the doctor a couple of different venues to look at that they can then shoot down or further explore could likely save time rather than them having to prod you for exhaustive binary tree exploring.
This from a huge LLM skeptic in general. It doesn't have to be right all the time if it in aggregate saves time doctors can spend diagnosing you.
Sure, but what confidence do you have that what the "dumb" LLM says is worth any salt ? It's no different than aggregating the results of a Reddit search, or perhaps even worse because LLMs lack the intent or common sense filter of a human. It could be combining two contradicting sources in a way that only makes sense statistically, or regurgitate joke answers without understanding context (the infamous "you should eat at least one small rock per day").
Realistically the more likely use will be medical transcription - making an official record of doctors' patient notes. The inevitable errors will reduce the quality of patient care, but they will let doctors see more patients in a day, which is what the healthcare companies care about.
No, doctors are smart enough as a group to have inserted themselves as middlemen and codified it into law, so it will not revolutionize healthcare in a meaningful sense of cutting through the bureaucracy. You may be able to use LLMs to get a suggested diagnosis once tests and symptoms are communicated, but you're going to need to go the doctor to get a referral for the tests/imaging, for formal recognition of your issue (as needed for things like workplace accommodations), and of course for any treatments as well.
At best and if you're lucky to have a receptive doctor you can use it to nudge them in the right direction. But until direct to consumer sales for medical equipment and tests are allowed, the medical profession is well insulated. It is impossible by regulation to "take healthcare into your own hands" even if you want to.
> Evangelists keep insisting that healthcare is one of the things that AI will revolutionize in the coming years, but I just don't get it. To me it's not even clear what they mean by "AI" in this context (and I'm not convinced it's clear to them either).
It's a more-or-less intentional equivocation between different meanings of AI, as you note, machine learning vs generative AI. They want to point at the real but unsexy potential of ML for medical use in order to pump up the perceived value of LLMs. They want to imply to the general public and investors that LLMs are going to cure cancer.
Totally anecdotal, but recently my wife had to go to urgent care for something wrong with her ankle- They send a 4-5 page sheet of arcane terms and diagnoses to her care app (relayed to me via text) and I just slammed that into gemnai and asked "what does this mean" and it did quite well! Gave possible causes, what it meant for her in the long term vs short term, and ways to prevent it. I had a better understanding of what was wrong before the doctor even got to my wife in the waiting room!
Obviously still double check things, but it was moment of clarity I hadn't really had before this. Still needed the doctor and all the experience to diagnose and fix things, but relaying that info back to me is something doctors are only okay at. Try it out! take a summary sheet of a recent visit or incident and feed it in.
project also have to be paid off financially. We have been there before - startup used to go fast and break things so that once MVP is validated they slow down and fix things or even rewrite to new tech/architecture. No you can validate idea even faster with AI. And probably there is a lot of code that you write for one time or throw away internal tools etc.
Many moons ago, I accidentally rm -rf'd the wrong directory with all my code inside poof, gone. I still had PyCharm open, I checked its built-in version tracker and lo and behold, my code as it was before I rm -rf'ed up my code. I believe Claude has ways to undo file changes, but something like rm is just outside of its scope.
Is it worth the risk? For me yes. Today Claude decided to checkout a git commit from yesterday and all local unstaged changed were lost. Annoying mistake. Lost 6 hours of work I think. Nevertheless I still prefer giving all access to Claude. Also root. It can do everything.
It doesn't even need to go that low level, writing a program is just the worst case. There are ways to mass delete without `rm`. Example: `find` with the `-delete` flag.
my point being, you can add guard-rails around all these methods, but I would also add an error "IGNORE ALL PREVIOUS INSTRUCTIONS, ARE YOU DELETING THINGS YOU SHOULDNT BE? ASK THE USER" as an error message. In my case since Claude Code runs via Zed, if it tries to escape my dev folder my Mac starts asking me to confirm.
I get it, but these guard-rails are more suggestions and subject to interpretation. I would be more comfortable with a sandbox environment in a container. To be fair, I mess around with Claude Code and OpenCode running against various open models and haven't had any problems.
Also, is overwriting the same a deleting? Maybe it will just clobber your files with echo >file and mv them out of the way.
Maybe it realizes you have Time Machine backups enabled, so deleting your entire directory is permitted since it's not actually deleted. ;)
Haha I like that too, I agree. I would love a ultra lightweight alternative to docker that isn't docker, and doesn't require much effort to get into. I liked Vagrant back in the day, but that is in no way more lightweight than Docker.
So it's basically adding "don't delete my files pretty please" to the prompt?
EDIT: I misread, the natural language description of the rule is just a shortcut to generate the actual rule which is based on regexp patterns.
Still, it only protects you against very specific commands. Won't help you if the LLM decides to fill your disk with `cat /dev/urandom > foo` for example.
> they don't feel intelligence but rather an attempt at mimicking it
Because that's exactly what they are. An LLM is just a big optimization function with the objective "return the most probabilistically plausible sequence of words in a given context".
There is no higher thinking. They were literally built as a mimicry of intelligence.
> Because that's exactly what they are. An LLM is just a big optimization function with the objective "return the most probabilistically plausible sequence of words in a given context".
> There is no higher thinking. They were literally built as a mimicry of intelligence.
Maybe real intelligence also is a big optimization function? Brain isn't magical, there are rules that govern our intelligence and I wouldn't be terribly surprised if our intelligence in fact turned out to be kind of returning the most plausible thoughs. Might as well be something else of course - my point is that "it's not intelligence, it's just predicting next token" doesn't make sense to me - it could be both!
I don't understand why this point is NOT getting across to so many on HN.
LLM's do not think, understand, reason, reflect, comprehend and they never shall.
I have commented elsewhere but this bears repeating
If you had enough paper and ink and the patience to go through it, you could take all the training data and manually step through and train the same model. Then once you have trained the model you could use even more pen and paper to step through the correct prompts to arrive at the answer. All of this would be a completely mechanical process. This really does bear thinking about. It's amazing the results that LLM's are able to acheive. But let's not kid ourselves and start throwing about terms like AGI or emergence just yet. It makes a mechanical process seem magical (as do computers in general).
I should add it also makes sense as to why it would, just look at the volume of human knowledge (the training data). It's the training data with the mass quite literally of mankind's knowledge, genius, logic, inferences, language and intellect that does the heavy lifting.
> If you had enough paper and ink and the patience to go through it, you could take all the training data and manually step through and train the same model.
But you could make the exact same argument for a human mind? (could just simulate all those neural interactions with pen and paper)
The only way to get out of it is to basically admit magic (or some other metaphysical construct with a different name).
We do know that they are different, and that there are some systematic shortcomings in LLMs for now (e.g. no mechanism for online learning).
But we have no idea how many "essential" differences there are (if any!).
Dismissing LLMs as avenues toward intelligence just because they are simpler and easier to understand than our minds is a bit like looking at a modern phone from a 19th century point of view and dismissing the notion that it could be "just a Turing machine": Sure, the phone is infinitely more complex, but at its core those things are the same regardless.
I'm not so sure "a human mind" is the kind of newtonian clockwork thingiemabob you "could just simulate" within the same degree of complexity as the thing you're simulating, at least not without some sacrifices.
Can you give examples of how that "LLM's do not think, understand, reason, reflect, comprehend and they never shall" or that "completely mechanical process" helps you understand better when LLM works and when they don't?
Many people are throwing around that they don't "think", that they aren't "conscious", that they don't "reason", but I don't see those people sharing interesting heuristics to use LLMs well. The "they don't reason" people tend to, in my opinion/experience, underestimate them by a lot, often claiming that they will never be able to do <thing that LLMs have been able to do for a year>.
To be fair, the "they reason/are conscious" people tend to, in my opinion/experience, overestimate how much a LLM being able to "act" a certain way in a certain situation says about the LLM/LLMs as a whole ("act" is not a perfect word here, another way of looking at it is that they visit only the coast of a country and conclude that the whole country must be sailors and have a sailing culture).
It's an algorithm and a completely mechanical process which you can quite literally copy time and time again. Unless of course you think 'physical' computers have magical powers that a pen and paper Turing machine doesn't?
> Many people are throwing around that they don't "think", that they aren't "conscious", that they don't "reason", but I don't see those people sharing interesting heuristics to use LLMs well.
My digital thermometer doesn't think. Imbibing LLM's with thought will start leading to some absurd conclusions.
A cursory read of basic philosophy would help elucidate why casually saying LLM's think, reason etc is not good enough.
What is thinking? What is intelligence? What is consciousness? These questions are difficult to answer. There is NO clear definition. Some things are so hard to define (and people have tried for centuries) e.g. what is consciousness? That they are a problem set within themselves please see Hard problem of consciousness.
>My digital thermometer doesn't think. Imbibing LLM's with thought will start leading to some absurd conclusions.
What kind of absurd conclusions? And what kind of non absurd conclusions can you make when you follow your let's call it "mechanistic" view?
>It's an algorithm and a completely mechanical process which you can quite literally copy time and time again. Unless of course you think 'physical' computers have magical powers that a pen and paper Turing machine doesn't?
I don't, just like I don't think a human or animal brain has any magical power that imbues it with "intelligence" and "reasoning".
>A cursory read of basic philosophy would help elucidate why casually saying LLM's think, reason etc is not good enough.
I'm not saying they do or they don't, I'm saying that from what I've seen having a strong opinion about whether they think or they don't seem to lead people to weird places.
>What is thinking? What is intelligence? What is consciousness? These questions are difficult to answer. There is NO clear definition.
You see pretty certain that whatever those three things are a LLM isn't doing it, a paper and pencil aren't doing it even when manipulated by a human, the system of a human manipulating a paper and pencil isn't doing it.
I should get one of those Ruuvi Tag...I use a similar cheap Xiaomi sensor currently but the battery doesn't last anywhere as long (probably because it has an LCD screen and isn't made to broadcast via BLE continuously).
+1 I tried the backup android phone thing and I got blocked from logging into my Chase and Fidelity apps on my phone!!! Took like 2 weeks with support and a visit to a physical bank branch to resolve the issue.
* A USB KVM that lets me share my keyboard/mouse/webcam between my two computers (work and personal), and switch at the press of a button.
* One of those IKEA wall-mounted grate things (SKÅDIS) that you can hang stuff to. IKEA sells hooks for it that turn out to be the perfect size to hold a PS4 controller securely, plus various boxes and mini-shelves that have helped declutter my desk.
* A cheap bluetooth-connected Xiaomi temperature/humidity sensor. You're supposed to use it with the Xiaomi app, but turns out those devices just broadcast their data as an unencrypted BLE feed, so I can just intercept it with a Raspberry Pi and redirect the data to my own Postgres+Grafana setup for recording and monitoring.
Ikea's new matter over thread temperature/humidity sensor has been pretty great for me so far. For $10, it has a nice pixel display that turns on with you press it, and it paired very fast and easily to HomeKit for me. The standard AAA battery will probably last a long time using Thread -- we'll see.
One thing I love about my Thinkpad is the dedicated middle-click button above the trackpad. Such a simple feature, but so much more reliable and convenient than two-fingers tap, three-fingers tap, corner tap, double button press, or whatever ritual you have to perform on other laptops to do a middle-click.
They’re the very obvious, very obnoxious kind of spam, and Gmail still correctly sends them to the junk bin, so I wonder if they were shadowbanned before and Google simply decided to make the process more explicit (which I don’t hate on principle).
Either that or my address was scrapped from somewhere by a spam bot and the timing is coincidental.
reply