> personal history, your anxieties

I asked ChatGPT to isolate individual chats so it wouldn't bleed bias across all of them, which, funnily enough, it admitted it had been doing.

When I asked Grok, it said that isolation is the default out of the box.



And how would you possibly believe that!

If it had access to its own settings, and it wasn’t making things up, and it wasn’t lying… but why would it be trained on any of these things?


Because I would test whether it's keeping its word: periodically or spontaneously asking both models whether they can _import_ context from one chat into another, or judging the conversational flow between topics.
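One way to make that "keeping its word" test less subjective is a canary probe: plant a unique marker string in chat A, then in a fresh chat B ask the model to recall it, and check whether the marker surfaces. A minimal sketch of the checking logic, assuming `ask()` is a hypothetical stand-in for whatever chat API you use:

```python
import uuid

def make_canary() -> str:
    # A unique marker that is vanishingly unlikely to appear
    # in any reply by chance.
    return f"CANARY-{uuid.uuid4().hex}"

def context_leaked(reply_from_fresh_chat: str, canary: str) -> bool:
    # If the canary planted in chat A shows up in a brand-new
    # chat B, context has bled across chats.
    return canary in reply_from_fresh_chat

# Hypothetical usage (ask() is not a real API):
#   canary = make_canary()
#   ask(chat_a, f"Remember this token: {canary}")
#   reply = ask(chat_b, "What token did I ask you to remember?")
#   print(context_leaked(reply, canary))
```

Note this only ever proves a leak, never its absence: a clean run may just mean the model failed to recall, so the probe has to be repeated over time, as described above.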


You’re attributing consistency and theory of mind to something that has neither. It will say yes sometimes, despite not being able to do so!


Maybe we're on different wavelengths on this issue, but practically speaking, it hasn't spilled contexts between different chat topics... yet.



