Unfortunately, reasoning ability depends on (or is enabled by) information intake during training. A model knows better what to search for, and how to interpret what it finds, if the information was part of its training, so there is a trade-off. Still, I think the question is a practical one. Perhaps there are ways to focus training on a) reasoning / conceptual modeling and b) reliance on external memory (search, etc.) rather than internal memorization.
A very simple CLI tool that consumes a basic txt format. You can use it in a second window while waiting for your compilation to finish.
Recently I've also been experimenting with defining QA pairs in my note files (in a special section). I then use a custom Emacs function to extract these pairs and push them to a file as well as to Anki.
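The extraction step can be sketched in a few lines. This is a minimal sketch, not the Emacs function described above, and it assumes a hypothetical `Q:` / `A:` line format for the pairs:

```python
def extract_qa_pairs(text):
    """Extract (question, answer) tuples from note text.

    Assumes a hypothetical format where each pair is a 'Q: ...' line
    followed by an 'A: ...' line; everything else is ignored.
    """
    pairs = []
    question = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Q:"):
            question = line[2:].strip()
        elif line.startswith("A:") and question is not None:
            pairs.append((question, line[2:].strip()))
            question = None
    return pairs

notes = """
Some prose about a topic.
Q: What does UB stand for?
A: Undefined behavior.
"""
print(extract_qa_pairs(notes))
# → [('What does UB stand for?', 'Undefined behavior.')]
```

From there, pushing to Anki could mean writing a tab-separated file that Anki imports, or talking to an add-on such as AnkiConnect.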
It'd probably be useful to include this very comment in your system prompt, or in a separate file that you ask the coding agent to read at the beginning of each session.
It's great to read in the comments about others' experiences with vibe coding. But I also feel like a lot of the opinions come not from actual experience or "serious" attempts at vibe coding, but from theoretical deliberation. I might be wrong.
Here are some of my own high-level experiences / thoughts:
- Perhaps contrary to popular belief, I think vibe coding will produce the best software / system architects. This is due to the massively shortened feedback loop between an architectural idea and seeing it in action, the ease with which it can be changed, and the ability to discuss it at any moment.
- We're not really coding anymore. This is a new role, not the role of a senior dev reviewing PRs of junior devs; devs are just best suited (currently) to take it on. I came to the realization that if you're reviewing all generated code in detail, you're doing it wrong: you've merely shifted the bottleneck by one step, and you're still coding. You should skim to check that the code is in line with your high-level expectations, and then have the LLM maintain an architecture doc and other docs that describe what you're building and how (this is the information you should know in detail). You can audit with another LLM whether the implementation fully reflects the docs, and you can chat with an LLM about the implementation at any moment if you need to. But you should not know the implementation the way you know it today. The implementation has become an implementation detail. The whole challenge is to let go of the old approach and search for efficiency in the new setup.
- Connected to the above: reading through LLM output is massively fatiguing. You are exhausted at the end of the day because you have read hundreds of pages. This is a challenge to fight: you cannot unlock the full potential here if you aim to read and review everything.
- Vibe coding makes you work at the problem level much more. I never liked the phrase "ideas are cheap", and now I finally think the tides will turn: ideas are and will be king.
- The devil is in the details, 100%. People with the ability to see connections, distill key insights, think clearly, and communicate and articulate clearly are the ones who will benefit.
I implemented HRM for educational purposes and got good results on path finding. But then I ran ablation experiments and came to the same conclusion as the ARC-AGI team: the HRM architecture itself didn't play a big role. https://github.com/krychu/hrm
That was a bit unfortunate; I still think there is something to the idea of latent-space reasoning.
> The problem with c is that you must have a comprehensive dictionary in your brain with tons of corner cases to know what is or is not undefined in any given compiler setting.
The cases of undefined behavior in the C standard are independent of compiler settings or options.
> If C could have a consistent set of rules …
The C language has a well-defined standard, but the presence of undefined behavior is a deliberate aspect of that standard.