
I definitely respect David's opinion given his caliber, but pieces like this make me feel strange for just not having a burning desire to use these tools.

Like, yesterday I made some light changes to a containerized VPN proxy that I maintain. My first thought wasn't "how would Claude do this?" Same thing with an API I made a few weeks ago that scrapes a flight data website to summarize flights in JSON form.

I knew I would need to write some boilerplate and that I'd have to visit SO for some stuff, but asking Claude or o1 to write the tests or boilerplate for me wasn't something I wanted or needed to do. I guess it makes me slower, sure, but I actually enjoy the process of making the software end to end.

Then again, I do all of my programming in Vim and, technically, writing software isn't my day job (I'm in pre-sales, so, best case, I'm writing POC stuff). Perhaps I'd feel differently if I were doing this day in, day out. (Interestingly, I feel the same way about AI as I do about VSCode. I've used it; I know what it's capable of; I have no interest in it at all.)

The closest I got to "I'll use LLMs for something real" was using one in my backend app that tracks all of my expenses to parse pictures of receipts. Theoretically, this would save me 30 seconds per scan, as I wouldn't need to add all of the transaction metadata myself. Realistically, this would (a) make my review process slower, as LLMs are not yet capable of saying "I'm not sure" and I'd have to manually check each transaction at review time, (b) make my submit API endpoint slower, since it takes relatively forever to analyze images (or at least it did when I experimented with this on GPT4-turbo last year), and (c) drive my costs way up (this service costs almost nothing to run, as I run it within Lambda's free tier limit).
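For reference, the receipt-parsing idea described above can be sketched in a few lines. This is a minimal illustration only, not the parent's actual implementation: the model name, prompt, and the `parse_receipt` helper are all assumptions, and it assumes the OpenAI Python SDK with an API key in the environment.

```python
# Minimal sketch (not the parent's code): send a receipt photo to a
# vision-capable model and ask for transaction metadata as JSON.
# Model name and prompt are placeholders for illustration only.
import base64
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def parse_receipt(image_path: str) -> dict:
    """Return merchant/date/total guessed from a receipt photo."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the parent mentioned GPT4-turbo
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Extract merchant, date (ISO 8601), and total from this "
                         "receipt as JSON. If a field is unreadable, set it to null."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return json.loads(resp.choices[0].message.content)
```

Even with JSON mode, this doesn't solve the parent's point (a): the model will happily return a confident-looking value for an unreadable field, so each transaction still needs a manual review pass.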



I'm an average dev. I was never into LLMs/Copilot etc. and used to mock prompt engineering, but my current job is working with an LLM framework, so I guess it future-proofs me. I do like computer vision and ML on datasets, e.g., training handwriting recognition from IMU gestures; that's cool.

With embeddings, I feel like there is something there, even if the model doesn't actually understand. My journey has just begun.

I scoff every time someone says "this + AI"; AI is just this thing they throw in there. The last time I didn't want to work with some tech, I quit my job, which was not a good move given that I'm not financially independent. Anyway, yeah, I'll keep digging into this. I still don't use Copilot right now, but I'm reading up more on embeddings for cross-training or use cases like RAG.
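Since embeddings and RAG come up here, a minimal sketch of the retrieval half: embed a small corpus, embed a query, rank by cosine similarity. Everything in it (model name, documents, `top_k`) is a placeholder for illustration, assuming the OpenAI embeddings API and numpy.

```python
# Minimal RAG-style retrieval sketch: embed a corpus, embed a query,
# rank documents by cosine similarity. Model name and corpus are
# placeholders for illustration only.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

docs = [
    "Invoices are stored as JSON in S3.",
    "The expense tracker runs inside Lambda's free tier.",
    "Receipts are parsed into merchant, date, and total fields.",
]


def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])


doc_vecs = embed(docs)


def retrieve(query: str, top_k: int = 2) -> list[str]:
    q = embed([query])[0]
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(-sims)[:top_k]]


print(retrieve("Where do parsed receipts end up?"))
```

The retrieved documents would then be stuffed into the prompt of a generation call; that second half is where "it doesn't actually understand" starts to matter.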


I think there's a big selection bias on Hacker News that you wouldn't get elsewhere. There are still "elite" software developers I see who really aren't into the whole LLM tooling space. I found use in the autocomplete and search workflows that the author mentioned, but I stopped using these tools out of curiosity for how things were before. It turns out I don't need them to be productive, and I probably enjoy working more without them too.



