Hacker News

There are tricks one can use to mitigate some of the pitfalls when using either a conversational LLM or a code assistant.

They emerge from two simple assumptions:

- LLMs fundamentally pattern match bytes. It's stored bytes + user query = generated bytes.

- We humans share common biases and instinctively rely on heuristics, and we are aware of only some of them, such as confirmation bias or anthropomorphism.

Some tricks:

1. Ask for alternate solutions or let them reword their answers. Make them generate lists of options.

2. When getting an answer that seems right, query for a counterexample or ask the model to make the opposite case. This can sometimes help one remember that we're really just dealing with clever text generation. In other cases it can create tension (I need to research this more deeply or ask an actual expert). Sometimes it will solidify one of the two answers.

3. Write in a consistent and simple style when using code assistants. They are the most productive and reliable when used as super-auto-complete. They only see the bytes, they can't reason about what you're trying to achieve and they certainly can't read your mind.

4. Let them summarize previous conversations or a code module from time to time. Correct them and add direction whenever they are "off", either with prompts or by adding comments. They simply need more bytes to look at in order to produce the right ones at the end.

5. Try to get wrong solutions. Make them fail from time to time, or ask too much of them. This develops an intuition for when these tools work well and when they don't.

6. This is the most important and reflected in the article: Never ask them to make decisions, for the simple fact that they can't do it. They are fundamentally about _generating information_. Prompt them to provide information in the form of text and code so you can make the decisions. Always use them with this mindset.
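Tricks 1 and 2 can be mechanized as reusable prompt templates. The sketch below is illustrative only; the function names and prompt wording are my own inventions, not from any particular library, and the strings would be fed to whatever chat API you happen to use.

```python
# Hypothetical prompt-builder helpers for tricks 1 and 2.
# All names and wording here are illustrative assumptions.

def alternatives_prompt(question: str, n: int = 3) -> str:
    """Trick 1: force a list of options instead of a single
    authoritative-sounding answer."""
    return (
        f"{question}\n\n"
        f"List {n} distinct approaches, each with one trade-off."
    )

def counterexample_prompt(answer: str) -> str:
    """Trick 2: after receiving an answer, ask for the opposite
    case, so the final decision stays with the human."""
    return (
        "You previously answered:\n"
        f"{answer}\n\n"
        "Now argue the opposite: give a counterexample or the "
        "strongest case against this answer."
    )

# The model only generates information (trick 6); deciding between
# the alternatives it produces remains the user's job.
print(alternatives_prompt("How should I cache these DB queries?"))
```

The point of templating these queries is less automation than habit: if asking for alternatives and counterexamples is one function call away, it is harder to slip into accepting the first plausible-sounding answer.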


