Hacker News

I find it's the opposite: LLMs can be made to agree with anything, largely because that agreeability is baked into their system prompt.


Yeah, this. Every conversation inevitably ends with "you're absolutely right!" The number of "you're absolutely right"s per session is roughly how I measure model performance (inverse correlation).
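Taking the joke metric literally, counting the phrase per session is a one-liner. Everything here (the helper name, the phrase pattern, the sample transcript) is hypothetical, just a sketch of the tongue-in-cheek measurement:

```python
import re

# Illustrative sycophancy marker; real sessions would need a broader phrase list.
AGREEMENT_PATTERN = re.compile(r"you'?re absolutely right", re.IGNORECASE)

def sycophancy_score(transcript: str) -> int:
    """Count occurrences of the agreement phrase in a session transcript."""
    return len(AGREEMENT_PATTERN.findall(transcript))

session = (
    "User: That function is O(n).\n"
    "Assistant: You're absolutely right!\n"
    "User: Actually, it's O(n^2).\n"
    "Assistant: You're absolutely right!\n"
)
print(sycophancy_score(session))  # higher score ~ more sycophantic session
```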


Ha, touché!





