
Telling the LLM to walk through the response step by step is a prompt engineering thing, though.


It is. It's studied in the literature under the name "chain of thought" (CoT), I believe. It's still subject to the limitations I mentioned. (Though the output tends to be more persuasive to a human even when the answer is the same, so you should be careful.)
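To make the idea concrete, here's a minimal sketch (not tied to any particular LLM library; the question text and wording of the instruction are just illustrative): the only difference between a plain prompt and a chain-of-thought prompt is an added instruction asking the model to show its reasoning.

    # Plain prompt vs. chain-of-thought prompt: same question, one extra instruction.
    question = (
        "A bat and a ball cost $1.10 together. The bat costs $1.00 more "
        "than the ball. How much does the ball cost?"
    )

    plain_prompt = question

    cot_prompt = (
        question
        + "\n\nLet's think step by step, then state the final answer on its own line."
    )

    # Feed either string to the LLM of your choice. The CoT version typically
    # produces intermediate reasoning before the answer, which (as noted above)
    # can read as more persuasive even when the final answer is unchanged.

Whether the extra reasoning actually improves accuracy, or just makes the same answer look more convincing, is exactly the caveat in the parent comment.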




