
Does anyone have an understanding - or intuition - of what the agentic loop looks like in the popular coding agents? Is it purely a "while 1: call_llm(system, assistant)", or is there complex orchestration?

I’m trying to understand if the value for Claude Code (for example) is purely in Sonnet/Haiku + the tool system prompt, or if there’s more secret sauce - beyond the “sugar” of instruction file inclusion via commands, tools, skills etc.



Claude Code is an obfuscated JavaScript app. You can point Claude Code at its own package and it will pretty reliably tell you how it works.

I think Claude Code's magic is that Anthropic is happy to burn tokens. The loop itself is not all that interesting.

What is interesting is how they manage the context window over a long chat. And I think a fair amount of that is server-side.


> Claude Code is an obfuscated JavaScript app. You can point Claude Code at its own package and it will pretty reliably tell you how it works.

This is why I keep coming back to Hacker News. If the above is not a quintessential "hack", then I've never seen one.

Bravo!


I've been running the obfuscated code through Prettier first, which I think makes it a bit easier for Claude Code to grep against.


No need to take guesses - the VS Code GitHub Copilot extension is open source and has an agent mode with tool calling:

https://github.com/microsoft/vscode-copilot-chat/blob/4f7ffd...


You can reverse engineer Claude Code by intercepting its HTTP traffic. It's pretty fascinating - there are a bunch of ways to do this, I use this one: https://simonwillison.net/2025/Jun/2/claude-trace/


Wow, it seems almost designed to burn through tokens.

I wish we had a version optimized around token/cost efficiency.


I thought this was informative: https://minusx.ai/blog/decoding-claude-code/


opencode is open source: https://github.com/sst/opencode. Here's a session I started but haven't had time to get back to, where I use opencode to ask how its own loop works: https://opencode.ai/s/4P4ancv4

The summary is:

The beauty is in the simplicity:

1. One loop: while (true)

2. One step at a time: stopWhen: stepCountIs(1)

3. One decision: "Did the LLM make tool calls? → continue : exit"

4. Message history accumulates tool results automatically

5. The LLM sees everything from previous iterations

This creates emergent behavior where the LLM can:

- Try something

- See if it worked

- Try again if it failed

- Keep iterating until success

All without explicit retry logic!
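A minimal Python sketch of that loop. call_llm and run_tool here are hypothetical stand-ins, not opencode's actual API - the point is only the shape of the control flow:

```python
# Minimal agentic loop sketch. call_llm and run_tool are hypothetical
# placeholders, not opencode's real implementation.

def call_llm(messages):
    # Placeholder: a real implementation would call a chat-completion API
    # that can return tool calls. Here we fake one tool call, then stop.
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "assistant", "content": None,
                "tool_calls": [{"name": "read_file", "args": {"path": "main.py"}}]}
    return {"role": "assistant", "content": "Done.", "tool_calls": []}

def run_tool(call):
    # Placeholder tool executor.
    return {"role": "tool", "name": call["name"], "content": "<file contents>"}

def agent_loop(user_prompt):
    messages = [{"role": "user", "content": user_prompt}]
    while True:                                  # 1. one loop
        reply = call_llm(messages)               # 2. one step at a time
        messages.append(reply)
        if not reply["tool_calls"]:              # 3. one decision:
            return reply["content"]              #    no tool calls -> exit
        for call in reply["tool_calls"]:
            messages.append(run_tool(call))      # 4. history accumulates results
                                                 # 5. next call sees everything

print(agent_loop("How does the loop work?"))  # -> Done.
```

The retry behavior falls out for free: if a tool result indicates failure, the model sees that result on the next iteration and can simply try again.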


Generally, that's pretty much it. More advanced tools like Claude Code also have context compaction (which sometimes isn't very good), and possibly RAG on code (unsure about this - I haven't used any tools that did). Context compaction, to my understanding, is just passing all the previous context into a call that summarizes it; that summary then becomes the new starting point for the context.
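A rough sketch of that compaction step, assuming a hypothetical summarize() helper (in practice that call is itself an LLM request asking for a summary of the transcript):

```python
def summarize(messages):
    # Hypothetical stand-in: a real implementation would send the old
    # transcript to the model and ask for a prose summary.
    return f"<summary of {len(messages)} earlier messages>"

def compact(messages, keep_last=4):
    """Replace all but the last few messages with a single summary message."""
    if len(messages) <= keep_last:
        return messages
    old, recent = messages[:-keep_last], messages[-keep_last:]
    summary = {"role": "user", "content": summarize(old)}
    return [summary] + recent   # the new, shorter context starting point

history = [{"role": "user", "content": f"msg {i}"} for i in range(10)]
print(len(compact(history)))  # -> 5: one summary plus the 4 most recent messages
```

The trade-off is obvious from the sketch: whatever the summary drops is gone for good, which is presumably why compaction "sometimes isn't very good."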


Have a look at https://github.com/anthropics/claude-code/tree/main/plugins/... to see how a fairly complex workflow is implemented.



