Does anyone have an understanding - or intuition - of what the agentic loop looks like in the popular coding agents? Is it purely a “while 1: call_llm(system, assistant)”, or is there complex orchestration?
I’m trying to understand if the value for Claude Code (for example) is purely in Sonnet/Haiku + the tool system prompt, or if there’s more secret sauce - beyond the “sugar” of instruction file inclusion via commands, tools, skills etc.
You can reverse engineer Claude Code by intercepting its HTTP traffic. It's pretty fascinating - there are a bunch of ways to do this; I use this one: https://simonwillison.net/2025/Jun/2/claude-trace/
The beauty is in the simplicity:
1. One loop - while (true)
2. One step at a time - stopWhen: stepCountIs(1)
3. One decision - "Did LLM make tool calls? → continue : exit"
4. Message history accumulates tool results automatically
5. LLM sees everything from previous iterations
This creates emergent behavior where the LLM can:
- Try something
- See if it worked
- Try again if it failed
- Keep iterating until success
- All without explicit retry logic!
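Here's a minimal sketch of that loop in Python. `call_llm` and `run_tool` are hypothetical stand-ins for the LLM API call (with tool definitions attached) and local tool execution - the message shapes loosely follow Anthropic-style tool use, not any specific SDK - but the control flow is the whole trick:

```python
# Minimal agent loop sketch. `call_llm` and `run_tool` are hypothetical helpers,
# not a real SDK; only the control flow matters here.

def agent_loop(system_prompt, user_request, tools):
    messages = [{"role": "user", "content": user_request}]

    while True:  # 1. one loop
        # 2. one step at a time: a single model call per iteration
        response = call_llm(system=system_prompt, messages=messages, tools=tools)
        messages.append({"role": "assistant", "content": response.content})

        tool_calls = [c for c in response.content if c["type"] == "tool_use"]
        if not tool_calls:
            return response  # 3. no tool calls -> exit with the final answer

        # 4. append tool results so they accumulate in the message history
        results = []
        for call in tool_calls:
            output = run_tool(call["name"], call["input"])
            results.append({
                "type": "tool_result",
                "tool_use_id": call["id"],
                "content": output,
            })
        messages.append({"role": "user", "content": results})
        # 5. loop again: the model sees everything from previous iterations
```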
Generally, that's pretty much it. More advanced tools like Claude Code also have context compaction (which sometimes isn't very good), and possibly RAG over the codebase (unsure about this - I haven't used any tools that did it). Context compaction, to my understanding, is just passing all the previous context into a call that summarizes it; that summary then becomes the new starting point for the context.
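A rough sketch of compaction as described above - when the history gets too long, ask the model to summarize it and restart the transcript from that summary. `call_llm` and `count_tokens` are hypothetical helpers again, and the prompt wording is just illustrative:

```python
# Context compaction sketch: summarize the transcript, then use the summary
# as the new context starting point. Helpers here are hypothetical.

COMPACTION_PROMPT = (
    "Summarize the conversation so far: what the user asked for, what has been "
    "tried, what worked, what failed, and what remains to be done."
)

def maybe_compact(messages, system_prompt, limit_tokens=100_000):
    if count_tokens(messages) < limit_tokens:
        return messages  # still fits, keep the full history

    summary = call_llm(
        system=system_prompt,
        messages=messages + [{"role": "user", "content": COMPACTION_PROMPT}],
    )
    # The summary replaces the old transcript as the new context.
    return [{"role": "user", "content": "Summary of prior work:\n" + summary.text}]
```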