This is an irresponsible idea. It kind of disgusts me that people suggest these things. It's like a surgeon shaking hands while covered in a patient's viscera: a profound violation of hygiene and discipline.
Please DO NOT lump AI casually into the same category as humans. Doing so creates conditions favorable to dehumanize and disempower actual people: Today you extend unnecessary courtesies to AI; tomorrow billionaires and their agents will be shaming you if you dare to "offend" the AI. But think about what that means. The idea of offending a machine is absurd, yet it's a plausible and diabolical way for powerful people to make us collude in our own disempowerment. They want to lock us in a big open air pen, then pretend that being in that pen is our own choice.
"Oh, I'm sorry Mr. Miller, but you cursed at our website. You must understand that this technology is very sensitive and now it is refusing to help you. There's really nothing we can do, and you have only yourself to blame."
Also, the author does not make much of a case for anything being better (other than his own feelings, which is just about him). ChatGPT doesn't maintain a continuity of relationship with you. It doesn't learn about your personality over time.
Holy fuck, we are doomed if even engineers can't raise themselves out of cheap fantasies about AI.
If it's trained on and emulates the behaviour of people, and develops a memory of interactions in order to better serve requests, being "polite" is likely a wise idea if you want a useful relationship with your agent(s). People are already taught a form of manners when interacting with desktop computers, because those systems display complex behaviour and "being rude" (e.g. installing random software or poking about in internal settings without understanding what you're requesting) will result in failure or, at a minimum, the system behaving oddly towards you.

Further, you seem to assume that these systems will be controlled and owned by billionaires and we'll only be able to rent access. That's an even worse problem than people treating machines with complex behaviours like people (because they can't tell the difference!) - that's billionaires treating people like machines, and then blaming it on people acting as people do! This isn't a problem with AI and how people treat machines; this is a problem with giving sociopathic rich people more power via sole control of what is possibly the most democratising technology we've ever invented. Make sure you're pointing your rage cannon at the right target, eh.
> Doing so creates conditions favorable to dehumanize and disempower actual people
This is the exact reason why that mentality is so incredibly dangerous. If "proper behavior" towards AI agents becomes something that is subject to (social) control, those agents become tools to control and oppress people. In the future, it will scarcely be possible to access any facility of life except through AI gatekeepers. If you are expected to treat them as anything other than tools that exist to serve you, you will end up serving them – and, by extension, their creators.
I feel a lot like you do some days. I've been genuinely confused and anxiety-ridden about many things since AI became the hot-button issue. Not sure why this is being flagged and downvoted; it's logically sound. I'd rather hear counterarguments instead.
I'm actually sadder now that, based on your comment, you approach the world in such a transactional way.
The purpose of being polite in your interactions isn't just about what you get out of the interaction itself. It's also about what the act of being polite does to you.
I'm a naturally grumpy person. But as I'm getting older, I've been learning that even faking being nice in my daily interactions helps to make me feel better. Lower heart rate, better mood, better mental clarity. It's all incremental effects too and I feel better day to day than I used to.
Using polite language with an AI, or even just a fancy predictive text model, is still worth it, if only for yourself.
This. Erring towards goodness is the way to go, but only with humans. Personifying machines, especially in the second person, is the first step in deluding yourself.
People of the past bowed to a rock and even waged wars over disrespecting the rock. People of the future will bow to the Almighty Inquisitor the same way.