Hacker News | csar's comments

The Chili Peppers long ago settled the ethics of coyote culling, but your attitude is cavalier and thoughtless.

For those of us living in suburbs, young children running just a few steps ahead (e.g. from the house to the other end of the driveway) can be in significant danger from coyotes.

Your hyperbole and cursing are not a great match for this forum and you should reconsider them or leave.


Be a better parent then. Sounds like a skill issue on your end.

Yeah I started in 98 or so and stopped in 2009. I should check back in.


For new-ish projects it should give you some crazy speed up out of the box.

For large codebases (my own has 500k lines and my company has a few tens of millions) you need something better like RPI.

If nothing else, being able to get answers to code questions basically instantly should give you a large speedup, even without any fancy stuff.


It’s told to only use it if relevant because most people write bad ones. Someone should write a tool to assess CLAUDE.md quality.
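A rough sketch of what such a checker might look like, assuming simple heuristics (file length, vague filler phrases, and a lack of concrete commands) as proxies for quality; a serious version would probably need an LLM judge:

```python
import re

# Hypothetical list of vague instructions that models tend to ignore.
VAGUE_PHRASES = ["write good code", "be careful", "follow best practices"]

def assess_claude_md(text: str) -> list[str]:
    """Return a list of warnings about a CLAUDE.md file."""
    warnings = []
    lines = [l for l in text.splitlines() if l.strip()]
    if len(lines) > 200:
        warnings.append("too long: huge instruction files get ignored")
    lowered = text.lower()
    for phrase in VAGUE_PHRASES:
        if phrase in lowered:
            warnings.append(f"vague instruction: {phrase!r}")
    # Good CLAUDE.md files usually contain concrete commands or paths.
    if not re.search(r"`[^`]+`", text):
        warnings.append("no concrete commands or file paths in backticks")
    return warnings

print(assess_claude_md("Write good code and be careful."))
```

The thresholds and phrase list are made up for illustration; the point is that most of the low-hanging fruit is mechanical.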


Claude Code is better out of the box, so all that other stuff is orthogonal or optional. If you eg want to give your agent access to your company’s Notion docs you need a skill.


You should never let context get that high unless you’re doing really basic things. Somewhere around 40-60% is generally the time to start thinking about exits for tougher tasks. Get out in the 60s.
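That rule of thumb can be written down as a tiny helper. The token accounting is assumed (read it from whatever your tool reports); the thresholds are the 40-60% guideline above:

```python
def should_exit(used_tokens: int, window_tokens: int, hard_task: bool) -> str:
    """Advise whether to wrap up a session based on context utilization.

    Thresholds follow the rule of thumb: on tough tasks, start planning
    an exit around 40%, and get out by the 60s regardless.
    """
    pct = 100 * used_tokens / window_tokens
    if hard_task and pct >= 40:
        return "start planning your exit"
    if pct >= 60:
        return "get out now"
    return "keep going"

print(should_exit(90_000, 200_000, hard_task=True))  # 45% on a tough task
```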


I keep work chunks small, which is why I can hit 90%. If I do have a large task, like a big planning effort, yes, I’d start fresh.


Getting feedback on a plan or implementation is valuable because you get a fresh set of eyes. Using multiple models may help, though it always feels a bit silly to me (if nothing else you’re increasing non-determinism, because you now have to understand two LLMs’ quirks).

But the “playing house” approach of experts is somewhere between pointless and actively harmful. It was all the rage in June and I thought people abandoned that later in the summer.

If you want the model to eg review code instead of fixing things, or document code without suggesting improvements, that’s useful. But there’s no need for all these personas.


The way it works is that each agent thinks independently, then they discuss the solution and each agent’s opinion, and then one of them synthesizes a solution.
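That flow can be sketched in a few lines. Everything here is hypothetical: `ask` is a stub standing in for a real LLM call (swap in your provider’s client), and the prompts are illustrative:

```python
def ask(model: str, prompt: str) -> str:
    """Stub standing in for a real LLM call; replace with your provider's client."""
    return f"[{model}] answer to: {prompt[:30]}"

def debate(models: list[str], question: str) -> str:
    # 1. Each agent thinks independently.
    drafts = {m: ask(m, question) for m in models}
    # 2. Each agent sees the others' opinions and revises.
    bundle = "\n\n".join(f"{m} said:\n{d}" for m, d in drafts.items())
    revised = {
        m: ask(m, f"{question}\n\nOther opinions:\n{bundle}\n\nRevise your answer.")
        for m in models
    }
    # 3. One agent synthesizes a final solution.
    joined = "\n\n".join(revised.values())
    return ask(models[0], f"Synthesize one solution from these answers:\n{joined}")
```

Whether the extra rounds buy you anything over a single strong model is exactly the question being debated in this thread.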


I understand. My point is that the personas are generally not a good idea and that there are much simpler and more predictable ways of getting better results.


I get where you're coming from, especially since role playing was so vital in early models in a way that is no longer necessary, or even harmful; however, when designing a complex system of interactions, there's really no way around it. And as humans we do this constantly, putting on a different hat for different jobs. When I'm wearing my developer hat, I have to reason about the role of each component in a system, and when I use an agent to serve in that role, by curating its context and designating rules for how I want it to behave, I'm assigning it a persona. What's more, I may prime the context with user and assistant messages, as examples of how I want it to respond. That context becomes the agent's personality--its persona.


Spot on


Love the throwbacks.

I wasted a few minutes earlier today trying to find the original website for the Cocoa class that Tristan helped set up a few years before this one got started.


Me: “This link can’t possibly be about what I think it might be about.” Me, seconds later: “Yes it is!!”


If you're getting AI slop you're doing it wrong. You should be getting high quality code. Of course that's easier said than done, but AI slop is a sign that things have gone off the rails.


I have scarcely gotten decent code. The best a model has spat out is 'fine', which is ok for menial tasks.

I have yet to see anyone show me an AI generated project that I'd be willing to put into production.

IDK, I feel like 'vibe coders', or people who heavily rely on LLMs, have allowed their skills (if they ever existed) to atrophy such that they're generally not great at assessing the output from models.


Spare me the koolaid.


