
My impression is that companies in most fields do not like being regulated or scrutinized, so nothing new there.

While observing people use LLMs, I realized that for a lot of them it makes a huge difference in time saved. For me the difference is not significant, but I am generally solving complex problems, not writing nicely formatted reports where words, not numbers, are what matter, so YMMV.



Is it good for one person (the writer) to save time, only for lots of other people (the readers) to have to do extra work to understand if the work is correct or hallucinated?


Is it good for one person (the writer) to ask a loaded question just to save some time on making their reasoning explicit, only for lots of other people (the readers) to have to do extra work to understand what the argument is?


> Is it good for one person (the writer) to save time, only for lots of other people (the readers) to have to do extra work to understand if the work is correct or hallucinated?

This holds true whether an LLM/AI is used or not — see substantial portions of Fox News editorial content as an example (often kernels of truth with wildly speculative or creatively interpretive baggage).

In your example, a responsible writer who uses AI will check all content produced in order to ensure that it meets their standards.

Will there be irresponsible writers? Sure. There already are. AI makes it easier for them to be irresponsible, but that doesn’t really change the equation from the reader’s perspective.

I use AI daily in my work. I describe it as “AI augmentation”, but sometimes the AI is doing a lot of the time-consuming stuff. The time saved on relatively routine scut work is insane, and the quality of the end product (AI with my inputs and edits) is really good and consistent.


Anecdata, N=1: I recently used aider, a tool that gives LLMs access to specific files and integrates with git. The tooling is great, but the LLMs are underwhelming. I realized that, once in the flow, I am significantly faster at producing large, correct, and on-point pieces of code myself, whereas reviewing LLM code was frustrating, needed multiple attempts, and frequently fell into loops.
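
For anyone who hasn't tried it: aider is usually driven interactively from the terminal, but it also has a small scripting API. A minimal sketch based on aider's scripting docs as I remember them (the file name and prompt are hypothetical, and exact signatures may have changed between versions):

    # pip install aider-chat  (expects an LLM API key in the environment)
    from aider.coders import Coder
    from aider.models import Model

    # Point the LLM at specific files inside the current git repo;
    # aider applies its edits to those files and auto-commits them.
    coder = Coder.create(
        main_model=Model("gpt-4o"),
        fnames=["greeting.py"],
    )
    coder.run("add a main() that prints a greeting and call it")

The git integration is the genuinely nice part: every change lands as a commit you can review or revert, which is what made the review-retry loop visible in the first place.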


anecdata n=1: LLMs lack the understanding of context, stakeholder sensitivities, and nuance in word usage needed to write reports with the required depth and at the quality bar I need. Maybe they are faster at generating BS reports with no substance, but so far I can still write my reports much better and much faster than an LLM, probably because the report is merely the artefact of solving a complex problem.



