Hacker News

My problem is that even when I do that, I'm not convinced it's making me any faster. When it gets things right and I compare against the time it would take to write the code myself, I'd estimate it's maybe 20% faster. But when it gets things wrong after a few prompts and I have to write the code myself anyway, it's more like 20% slower. Those two roughly average out. Worse, at the p90 it gets things subtly wrong, in a way where I accept the code and then spend twice as long reviewing and adjusting it as I would have spent writing it myself in the first place. So I'm not convinced it's making me any faster; if anything it feels the same or a bit slower. And unlike mentoring a junior engineer, there's no ROI on that time investment, since the model is just as likely to get it wrong again the next time.

The only place I've noticed a pronounced speed-up is with languages I'm not super familiar with. AI helps me translate concepts from languages I do know, and then a good old Google search is often enough to fill in the remaining blanks for me to be reasonably productive in a way that I wouldn't be without AI.



I think it depends on the problem domain. I have to implement a lot of throwaway ideas quickly, and LLMs are really useful there.

For instance, say I want to plot a complicated Matplotlib diagram. It takes me 10+ minutes and many context switches to get the syntax right (I don't use Matplotlib enough to have all the arguments at my fingertips). I also don't know everything Matplotlib can do -- I haven't read the entire docs. Fortunately LLMs have, and they get me into the right ballpark in 10-20 seconds. I usually want to try maybe 10-15 plots before settling on something, so LLMs definitely get me there much faster.
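To illustrate the kind of arg-heavy boilerplate I mean (this is a made-up sketch, not any particular prompt output), here's roughly what an LLM will hand you in seconds for a two-panel comparison plot, where I'd otherwise be looking up `subplots`, `fill_between`, and layout arguments:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 200)

# Two side-by-side panels with a shared y-axis
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3), sharey=True)

ax1.plot(x, np.sin(x), label="sin")
ax1.fill_between(x, np.sin(x), alpha=0.2)  # shade under the curve
ax1.set_title("sin(x)")

ax2.plot(x, np.cos(x), color="tab:orange", label="cos")
ax2.set_title("cos(x)")

for ax in (ax1, ax2):
    ax.legend(loc="upper right")
    ax.set_xlabel("x")

fig.tight_layout()
fig.savefig("demo.png", dpi=100)
```

None of this is hard, but each of those calls has keyword arguments I'd have to go look up, and that's exactly the 10-minute, multi-context-switch detour the model skips.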

I think if you have a clear idea of what you want to do, and how to do it, then maybe the time savings are not compelling. But if you're in a space where you're ideating and groping toward an idea, LLMs can significantly cut down the iteration time and even open up channels of inquiry that you didn't know existed.

They're primarily generative assistants. Using them to implement ideas in production is probably a secondary use.





