
That is a very interesting point about how little use of AI most of us are making day to day, despite the potential utility that seems to be lurking. I think it just takes time for people and economies to adapt to new technology.

Even if technological progress on AI were to stop today, and the best models that exist in 2030 are the same models we have now, there would still be years of social and economic change as people and companies figure out how to make use of novel technology.



Unless I'm doing something simple like writing a basic shell script or Python program, it's often easier to just do the work myself than to take the time to explain what I want to an LLM. There's something to be said for formulating your plan in clear steps ahead of time, but for many problems it just doesn't feel worth the time to write it all out.


I find that if a problem doesn't require planning, it's probably simple enough that the LLM can handle it with little input. If it does require planning, I might as well dump it into an LLM as another evaluator and then have it drive the implementation.



