
I agree that Claude Code is buggy as hell, but:

> Let's not talk about the feature parity with other agents.

What do you mean feature parity with other agents? It seems to me that other CLI agents are quite far from Claude Code in this regard.


Which other CLI agents are those? Because I've found OpenCode to be A LOT better than Claude Code.


What's better with OpenCode? I've never tried it. I like that Claude Code has double-escape, shift+tab, and the team of agents.


I haven't used OpenCode, but pi agent runs rings around Claude Code. It never eats tons of CPU on big outputs, no flickering, it's open source, tree-based context instead of Claude's linear context, easy toggling to collapse/expand tool outputs, built for extension with runtime reloading of extensions and skills, etc. You can easily build your own amp-code-like handoff mechanism, customize the UI (I see models' edit diffs syntax-highlighted with delta, and just added a keybind to list session-edited files plus files from git status in fzf), etc.
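To make "tree-based context" concrete, here's a minimal sketch of the idea (an illustration only, not pi's actual data model): each message is a node pointing at its parent, so you can branch from any point in the conversation instead of having one linear history.

```python
# Minimal sketch of tree-based vs linear conversation context.
# This is an illustration, not pi's actual implementation.
from dataclasses import dataclass


@dataclass
class Node:
    text: str
    parent: "Node | None" = None


def context_of(node: Node) -> list[str]:
    """Walk parents up to the root: the context the model sees on this branch."""
    out = []
    while node is not None:
        out.append(node.text)
        node = node.parent
    return list(reversed(out))


root = Node("system prompt")
a = Node("try approach A", parent=root)
b = Node("try approach B", parent=root)  # branch from the same root
print(context_of(a))  # ['system prompt', 'try approach A']
print(context_of(b))  # ['system prompt', 'try approach B']
```

A linear context is just the degenerate case where every node has exactly one child; the tree lets you fork, compare branches, and abandon dead ends without polluting the other branch's context.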

Meanwhile, with Claude Code I've had to get Claude to decompile the editor (extract the JS from the bun executable) _twice_ to diagnose weird things like why some documented config flags were not taking effect.

Opus is great - but I'd rather use a different model than be forced back into Claude Code.


The great advantage of Claude Code is that you can use a Claude subscription; unfortunately, you can't with pi.


same


@simonw wen pelican


FYI it's behind a feature flag (aka experiment). Just in case you rarely use experiments in DevTools:

DevTools -> Settings (cog, top right) -> Experiments -> Search for "Protocol Monitor" -> Check the checkbox


Why? The M3 Ultra already had 800 GB/s (6400 Gbps) of memory bandwidth.


But what did the base M3 have? Why compare across different categories?

Edit: Apparently ~100 GB/s, so a 1.5x improvement over the M3 and a 1.25x improvement over the M4. That seems impressive if it scales to the Pro, Max and Ultra.


And that was already impressive. High-end gaming computers with dual-channel DDR5 only reach ~100 GB/s of CPU memory bandwidth.


High-end gaming computers have far more memory bandwidth in the GPU, though. The CPU doesn't need more memory bandwidth for most non-LLM tasks, especially since gaming computers commonly use AMD chips with a giant cache on the CPU.

The advantage of the unified architecture is that the GPU can use all of the memory: unified memory wins when your dataset exceeds what you can fit in a GPU, but a high-end gaming GPU is far faster if the data fits in VRAM.


The other advantage is that you don't have to transfer assets across slow buses to get them into that high-speed VRAM.
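The bus really is the bottleneck here. A rough back-of-the-envelope (the exact figures are assumptions and vary by platform, but the orders of magnitude hold):

```python
# Rough, order-of-magnitude numbers; exact figures vary by platform.
PCIE4_X16_GB_S = 32.0  # ~PCIe 4.0 x16 unidirectional peak
VRAM_GB_S = 1000.0     # high-end gaming GPU memory bandwidth

asset_gb = 8.0  # hypothetical 8 GB of textures/weights to stage into VRAM
upload_s = asset_gb / PCIE4_X16_GB_S  # one-time copy over the bus
print(f"upload: {upload_s:.2f} s")    # 0.25 s before the GPU can even start
print(f"bus is {VRAM_GB_S / PCIE4_X16_GB_S:.0f}x slower than VRAM")
```

With unified memory there is no staging copy at all; with a discrete GPU you pay it every time the working set changes, at roughly 1/30th of VRAM speed.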


Right, but high-end gaming GPUs exceed 1000 GB/s, and that's what you should be comparing against if you're interested in any kind of non-CPU compute (tensor ops, GPU).


And you can find high-end (PC) laptops using LPDDR5x running at 8533 MT/s or higher, which gives you more bandwidth than DDR5.
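Same arithmetic as for DDR5 above, just with a wider transfer rate. Assuming a 128-bit LPDDR5x bus (an assumption, but common on PC laptop SoCs):

```python
def lpddr_bw_gb_s(bus_bits: int, mt_s: int) -> float:
    """Peak theoretical LPDDR bandwidth: bus width in bytes * transfers/s."""
    return (bus_bits / 8) * mt_s * 1e6 / 1e9


# 128-bit LPDDR5x at 8533 MT/s, vs ~102.4 GB/s for dual-channel DDR5-6400
print(round(lpddr_bw_gb_s(128, 8533), 1))  # 136.5 (GB/s)
```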


It feels to me that the OP on the forum expects this to work: "read this existing function, then read my mind and do stuff" (probably followed by "do better").

It still takes a lot of practice to get good at prompting, though.


Literally my manager


> you can overengineer your prompt to try get them to ask more questions

why overengineer? it's super simple

I just do this for 60% of my prompts: "{long description of the feature}, please ask 10 questions before writing any code"
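The pattern above is simple enough to wrap in a one-liner if you build prompts programmatically. A hypothetical sketch (the helper name and wording are made up; only the suffix pattern comes from the comment):

```python
# Hypothetical helper for the pattern described above; names are made up.
def clarifying_prompt(feature_description: str, n_questions: int = 10) -> str:
    """Append the 'ask questions first' instruction to a feature description."""
    return (f"{feature_description}, please ask {n_questions} questions "
            f"before writing any code")


print(clarifying_prompt("add CSV export to the reports page"))
```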


Kinda off-topic, but I love the quality of the images used in this article. Even the memes are HD. Quite rare, as people tend to use whatever the meme creator provides - a super-pixelated, low-quality source image with text slapped on top.


Yep, it's just bad. Physics feels totally off.


