
> we should judge a thing to be a duck or not, based on it having the external appearance and outward activity of a duck, and ignoring any further subtleties, intent, internal processes, qualia, and so on.

and the point here is we should not ignore further subtleties, intent, internal processes, qualia, etc., because they are extremely relevant to the issue at hand.

Treating GPT like a malevolent actor that tells intentional lies is no more correct than treating it like a friendly god that wants to help you.

GPT is incapable of wanting or intending anything, and it's a mistake to treat it like it does. We do care how it got to produce incorrect information.

If you have a robot duck that walks like a duck and quacks like a duck and you dust off your hands and say "whelp that settles it, it's definitely a duck" then you're going to have a bad time waiting for it to lay an egg.

Sometimes the issues beyond the superficial appearance actually are important.



>and the point here is we should not ignore further subtleties, intent, internal processes, qualia, etc., because they are extremely relevant to the issue at hand.

But the point is those are only relevant when trying to understand GPT's internal motivations (or lack thereof).

If we care about the practical effects of what it spits out (they function the same as if GPT had lied to us), then calling them "hallucinations" is as good as calling them "lies".

>We do care how it got to produce incorrect information.

Well, not when trying to assess whether it's true or false, and whether we should just blindly trust it.

From that practical aspect, which most people care about more (than about whether it has "intentions"), we can ignore any of its internal mechanics.

Thus treating it like "beware, as it tends to lie" will have the same utility for most laymen (and be a much easier shortcut) as any more subtle formulation.



