> A blade of grass has more humanity and is more deserving of respect than anything being referred to as AI does.
Emphatically disagree.
Even ignoring the obvious absurdity in this statement by pointing out that an LLM is emulating a human (quite well!) and a blade of grass is not:
I don't trust any human who can interact with something that uses the same method of communication as a human, and for all intents and purposes communicates like a human, and not feel any instinct to treat it with respect.
This is the kind of mindset that leads to dehumanizing other humans. Our brain isn't sophisticated enough to actually compartmentalize that - building the habit that it's right to treat something that talks like a sapient as if it deserves zero respect is going to have negative consequences.
Sure, you can believe it's just a tool, and consciously let yourself treat it as one. But treat it like an incompetent intern, not a slave.
I think ascribing humanity to something that isn’t human is far more dehumanizing to actual real life humans than the alternative. You are taking away actual people’s humanity if you’re giving it to anything we call AI.
I am capable of distinguishing between talking to another person and talking to an LLM and I don’t think that is hard to do.
I don’t think there is any other word than delusional to describe someone who thinks LLMs should be treated as humans.
Genuine question, why do you think this is so important to clarify?
Or, more crucially, do you think this statement has any predictive power? Would you, based on actual belief of this, have predicted that one of these "agents", left to run on its own would have done this? Because I'm calling bullshit if so.
Conversely, if you just model it like a person... people do this, people get jealous and upset, so when left to its own devices (which it was - which makes it extra weird to assert that "it just follows human commands" when we're discussing one that wasn't), you'd expect this to happen. It might not be a "person", but modelling it like one, or at least a facsimile of one, lets you predict reality with higher fidelity.
I'll be honest, as someone not familiar with Haskell, one of my main takeaways from this article is going down a rabbit hole of finding out how weird Haskell is.
The casualness with which the author states things like "of course, it's obvious to us that `Int -> Void` is impossible" makes me feel like I'm being xkcd 2501'd.
If you spend your life talking about bool having two values, and then need to act as if it has three or 256 values or whatever, that's where the weirdness lives.
In C, true doesn't necessarily equal true: anything nonzero is truthy, so two "true" values (say 1 and 2) can still compare unequal.
In Java, (myBool != Boolean.TRUE) does not imply that (myBool == Boolean.FALSE): a boxed Boolean can also be null.
Maybe you could do with some weirdness!
In Haskell:
Bool has two members: True & False. (If it's True, it's True. If it's not True, it's False).
Unit has one member: ()
Void has zero members.
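To make the cardinality difference concrete, here's a minimal sketch (the function names are made up; `Void` and `absurd` are from `Data.Void` in base):

```haskell
import Data.Void (Void, absurd)

-- Bool has two inhabitants, so a total function out of it needs two cases.
describe :: Bool -> String
describe True  = "yes"
describe False = "no"

-- () has exactly one inhabitant, so there is only one value you can ever pass.
unit :: ()
unit = ()

-- Void has no inhabitants, so there is no case to write at all.
-- 'absurd' is total precisely because it can never be applied to a real value.
fromVoid :: Void -> String
fromVoid = absurd
```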
To be fair I'm not sure why Void was raised as an example in the article, and I've never used it. I didn't turn up any useful-looking implementations on hoogle[1] either.
What were you expecting to find? A function which returns an empty type will always diverge, i.e. there is no return of control, because that return would have to carry a value that we've said never exists. In a systems language like Rust there are functions like this; for example, std::process::exit is a function which... well, hopefully it's obvious why that doesn't return. You could imagine that likewise, if one day the Linux kernel's reboot routine were written in Rust, that too would never return.
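The same idea can be written down in Haskell; a minimal sketch (the `eventLoop` name is invented) of how a `Void` return type advertises "this never hands control back" in the type itself:

```haskell
import Control.Monad (forever)
import Data.Void (Void)

-- The IO Void return type says this action can never finish normally:
-- finishing would require producing a value of Void, and no such value exists.
eventLoop :: IO Void
eventLoop = forever (putStrLn "handling the next request...")
```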
It's not like sleeping pills at all, actually. Sleeping pills have a huge dependence and tolerance factor. Antidepressants, generally, do not. Once you find one that works, it keeps working effectively forever.
It's actually like statins. Ideally, a doctor will recommend diet changes in addition to the pills. However, relying on lifestyle interventions alone is almost never effective, and the more we learn about it, the more we realize that cholesterol is mostly driven by genetics rather than diet anyway. So the most effective thing they can do is say "here, take these indefinitely". And thank God they do, because it saves thousands of lives annually.
For many people with depression, a neurochemical imbalance is the root cause. Just like with statins, addressing it means taking some pills.
Probably gonna get buried at the bottom of this thread, but:
There's a good chance they just asked GPT5.2 for a name. I know for a fact that when some of the OpenAI models get stuck in the "weird" state associated with LLM psychosis, three of the things they really like talking about are spirals, fractals, and prisms. Presumably, there's some general bias toward those concepts in the weights.
I can't take this website seriously if it thinks that a difficult-to-reproduce syncing bug that has been around for a decade can be easily solved (and tested!) in a week and a half. That's the kind of delusional time estimate even my CS-illiterate boss wouldn't make.
And at the right time too. At almost every point before that, a gas-powered engine was justified for duration and power, but the significant advances in both batteries and electric motors in the past 10-20 years have finally made them good enough that ICE tools are totally unjustified.
With all due respect, I hope you never touch the development of any piece of software any of my relatives or friends ever has to use.
Good UX is one of the most important yet underserved areas in the tech industry (the topic of this site), and this sort of attitude goes beyond being smug and naive to being actively harmful. Your goal should always be to make things easier, with as little friction as possible.
> This application requires passkey with PRF extension support for secure encryption key storage. Your browser or device doesn't support these advanced features.
> Please use Chrome 116+, Firefox 139+, or Edge 141+ on a device with platform authentication (Face ID, Touch ID, Windows Hello, etc.).
(Running Chrome 143)
So... does this just not support desktops without overpriced webcams, or am I missing something?