Hacker News | theteapot's comments

> While I’m certain that this technology is producing some productivity improvements, I’m still genuinely (and frustratingly) unsure just how much of an improvement it is actually creating.

I often wonder how much more productive I'd be if just a fraction of the effort and money poured into LLMs was spent on better API documentation and conventional coding tools. A lot of the time, I'm resorting to using an AI because I can't get information on how the current API of something works into my brain fast enough, because the docs are nonexistent, outdated, or scattered and hard to collate.


This is facts. All of this talk about putting agent skills directly into repos (as Markdown!) is maddening. "Where were LITERALLY ALL OF YOU whenever the topic of docs as code came up?"

This is doubly maddening with NotebookLMs. They are becoming single sources of knowledge for large domains, which is great (except you can't just read the sources, which is very "We will read the Bible to you" energy), but, in the past, this knowledge would've been all over SharePoint, Slack, Google Drive, Confluence, etc.


I've chosen to embrace the silver lining: there is now business backing to prioritize all the devx/documentation work, because the "value" is easier to quantify; LLM sessions provide a much larger sample size than inconsistent new-hire onboarding (which was also a one-time process, rather than per session).

I do think people are going way overboard with markdown though, and that'll be the new documentation debt. It needs to be relatively high level, with pointers rather than duplicated details; agents can parse code at scale much faster than humans.


Haha indeed. At work suddenly documentation and APIs are important, but it's all for/behind "skills". Before it was always "sure, that would be nice"...

I do welcome the improvements to doc and APIs this brings though!


My favorite thing is when some projects now have better documentation in their Claude skills or MCPs than they ever did for users.

Yeah, I joined a project a couple of months ago, felt completely lost.

Last week, a colleague finally added for Claude all the documentation I'd have needed on day one. Meanwhile, I'm addressing issues from the other direction, writing custom linters to make sure that Claude progressively fixes its messes.


There is a natural incentive for engineers working on a project to keep Claude skills up to date. I cannot say the same for general documentation.

But maybe not for long. When we get long-running AIs, the knowledge locked inside the AI's thinking might supplant docs once again. Like if you had an engineer working at your company for a long time and knowing everything. With all the problems that implies, of course.

That's the weird best thing about LLMs - there is finally incentive for projects to create documentation, CLIs, tests, and well organized modular codebases, since LLMs fall flat on their face without these things!

But that documentation itself is likely AI-generated

At least it saves me from having to generate the docs myself!

Why continue involvement with a project that clearly devalues their “customers” or “users” who care about documentation?

Projects that spend time on documentation for my robots have shown me they care about my use case!

I feel like Google search results have gotten tremendously worse over the past 2 years too. It's almost like you have to use AI search to find anything useful now.

Which of course reduces traffic to sites and thus the incentives to create the content you're looking for in the first place :(


I totally agree with you that this reduces traffic to sites, but there were also lots of websites whose information wasn't true or correct.

There are many groups that “win” by making search results worse. It’s an ongoing battle between them, and if someone’s blaming solely Google for it, they’re way oversimplifying.

Does anyone know which tool can be best used instead of Google for "classic", non-AI googling?

Kagi

> I often wonder how much more productive I'd be if just a fraction the effort and money poured into LLMs was spent on better API documentation and conventional coding tools.

Probably negligible. It's not a problem you can solve by pouring more money in. Evidence: configuration file formats. I've never seen programmers who enjoy writing YAML. And pure JSON (without comments) is simply not a format that should be written by humans. But as far as I know, even in the richest companies these formats are still common. And the bad thing they were supposed to replace, XML config, was popularized by rich companies too...!
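To make the "JSON without comments" point concrete, here's a small stdlib-only sketch (the config snippet itself is hypothetical); pure JSON per RFC 8259 has no comment syntax at all, so annotating a config by hand simply isn't possible:

```python
import json

# A config file the way a human would want to write it,
# with an inline note explaining a value.
config_text = """{
    // set retries high for flaky networks
    "retries": 5
}"""

try:
    json.loads(config_text)
    print("parsed")
except json.JSONDecodeError:
    # The parser rejects the "//" comment outright.
    print("pure JSON rejects comments")
```

This is exactly why JSON-with-comments dialects (JSONC, JSON5) keep getting reinvented for config files.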


Programmers don’t enjoy writing things they have no good understanding of and no good way to ascertain, or predict in advance, how exactly they will behave. That’s at least partly due to poor documentation. Good documentation gives you a reliable conceptual model and makes you confident about how to use a tool.

I love YAML, so there is at least one weirdo out there on the internet who is bitter that TOML and JSON won

As a TOML and JSON fan I must say those formats definitely didn't win :). YAML did, by a really long shot too unfortunately

JSON is not designed as a configuration file format.

As someone who does broad activities, it supercharges a lot of things. Having a critical eye is required though. I estimate 40%-60% improvements on basic coding tasks.

I don't bring huge codebases to it.


I can't speak for your efficiency, but for me it's now often easier to create a tool than find if one exists and learn how to use it.

I was able to one-shot a parameterized SVG template creator for a laser cutter. Unlikely I could have achieved the same with 40 hours of pure focus.


And hilariously, the worst offenders are AI frameworks themselves. A couple months ago I was helping a client build out some "agentic" stuff and we switched from OpenAI Agents library to Agno. Agents is messy enough, like making inconsistent use of its own enums etc, but with Agno you can really feel that they are eating their own dog food. Plenty of times I literally could not find the API for some object, and of course their docs page pushes you toward chatting with their goddamn docs chatbot, which barfs up some outdated function signature for you.

Yeah I get this impression too. AI feels like it's papering over overwrought and badly designed frameworks, tech stacks with far too many things in them, and also the decline of people creating or advocating for really expressive languages.

Pragmatic sure, but we're building a tower of chairs here rather than building ladders like a real engineering field.


Can’t you make the LLM write API documentation?

> better API documentation and conventional coding tools

Agreed, and it depends on the language I suppose. I'm a C++ developer and when you start working with templates even at a non-casual level, the compiler errors due to either genuine syntactic errors or 'seems correct but the standard doesn't support' can be infuriatingly obtuse. The LLM 'just knows' the standard (kind of, all 2k pages), and can figure out and fix most of those errors far faster than I can. In fact one of my preferred usages is to point Codex at my compiler output and get it to do nothing more than fix template errors.

Kotlin, for example, is much more in your face: the IDE does a correctness pass before you even invoke the compiler (in the traditional sense), and the language spec is considerably leaner with less (no?) UB, unlike C++.


Depends on which C++ we are talking about.

You can have the Kotlin experience with a mix of static_assert, constexpr, and concepts.

C++ IDEs also offer many goodies which those that insist on using vi and emacs keep missing out on.


I agree. I think of AI as a search engine on steroids.

But I think it IS the best way to search for information, to be able to put a question in natural language. I'm always amazed just how exactly on-point the answer is.

I mean, even the best docs out there that have a great search bar, like the Vue docs, still only match your search term and surface relevant topics.


Then you should be delighted we have LLMs; one of the use cases they are best suited to is writing documentation, much better than humans can.

Good is debatable. The docs I want point out the weird shit in the system. The AI docs I've read are all basically "the get user endpoint can be called with HTTP to get a user, given a valid auth token". Thanks, it would have been faster to read the code.

They write good _looking_ documentation. How good those docs are is entirely on the person/people who prompted them into existence.

Please don't inflict LLM docs on people

Why is this interesting?

The LLM content piracy to isomorphic plagiarism business loop is unsustainable. Yet for context search it is reasonably useful. =3

https://www.youtube.com/watch?v=T4Upf_B9RLQ


I dunno. I trained as a software engineer, pivoted to civil laborer. I just can't see a robot doing 90% of the stuff I do anytime soon. Same goes for plumber, electrician, ... even most mobile plant operations. As a supplement around the edges, sure. But replace? Not in the near term. And that's not even considering the safety certification moats around skilled labor roles.

I think event photography is another.

It's one thing to use AI to touch up photos, but in the end, you probably still want photos that match your memories and good photography still has an element of taste and creativity.


Yeah I think with all the AI slop around, people are going to value 'real' a lot more.

> For the next eight hours, every developer who installed or updated Cline got OpenClaw - a separate AI agent with full system access - installed globally on their machine ...

Except those with ignore-scripts=true in their npm config ...
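For anyone wanting that protection, the setting goes in your npm config, e.g. a line in `~/.npmrc` (sketch below). Note it disables lifecycle scripts for all packages, which can break the few that legitimately need a postinstall build step:

```
# ~/.npmrc
ignore-scripts=true
```

Equivalently, `npm config set ignore-scripts true` writes the same line for you.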


Or those who use pnpm


I’ll do you one better. I refuse to install npm or anything like npm. Keep that bloated garbage off my machine plz.

A guaranteed way for me to NOT try a piece of software is if the first setup step is “npm install…”


Sure, but throwing the baby out with the bathwater tends to not be a solution that people will find clever or reasonable.


I guess it’s because I do C++ and robotics. But npm is just not part of my world. The only time I come across it is when someone gets real lazy and doesn’t ship a proper single exe distributable. Claude Code and Codex CLIs were both naughty on initial release. But are now a single file distributable the way the lord intended.


> Most abstractions in software exist because humans need help. We couldn't hold the whole system in our heads, so we built layers to manage the complexity for us.

Kind of a sloppy statement, but I don't think it's accurate to say abstraction or layering exists in software just because humans need help comprehending it. Abstractions often exist to capture the essence of some aspect of the real world, and to allow for software reuse. AIs will still find reusing software useful? Secondly, you equate "abstractions" with "layers" which aren't really the same thing. Layers are more about separation of concerns. Maybe it could be argued layering is a type of abstraction.


Right now I'm trying to get an AI (actually two: ChatGPT and Grok) to write me a simple HomeAssistant integration that blinks a virtual light on and off, driven by a random boolean virtual sensor. I just started using HomeAssistant and don't know it well. 2+ hours and a few iterations in, it still doesn't work. Winning.


Forget both of them and throw everything at my boi Opus 4.6.


HomeAssistant is probably doing too much for what you need. Imo it's not a good piece of software. https://nodered.org/ is maybe a better fit. Or just some plain old scripts.


It looks like the point is Home Assistant integration. I seriously doubt they need an LED to be blinked on and off based on a mock sensor. That's either "for the integration test" or "as a placeholder for something more". Either way, the AI is failing.


Nah, HA is defs what I want. I agree it's terrible software. All the more motivation for me to try throwing AI at it. If the docs were better I'd just grind the docs instead and would probably be ahead, but the HA docs suck almost as bad as the code - which may have something to do with why the AIs are sucking, now that I think about it ..


You need to combine this with LVM or BTRFS or similar to get a true snapshot. Rsnapshot supports LVM snapshots pretty well.


Once you have btrfs you don't really need rsync anymore; its snapshot + send/receive functionality is all you need for convenient and efficient backups, using a tool like btrbk.


I feel like apparmor is getting there, very, very slowly. Just need every package to come with a declarative profile or fallback to a strict default profile.
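As a sketch of what a "declarative profile per package" could look like (the binary name and paths here are hypothetical, using standard AppArmor profile syntax):

```
# /etc/apparmor.d/usr.bin.example
#include <tunables/global>

/usr/bin/example {
  #include <abstractions/base>

  # Read-only config, writable owned state, nothing else on the filesystem
  /etc/example/** r,
  owner /var/lib/example/** rw,

  # Anything not listed is denied once the profile is enforcing
  deny network raw,
}
```

The appeal is that a package could ship this file next to its binary and get confinement by default, instead of profiles being hand-written after the fact.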


And most CS grads forget all that after a few years because it's not relevant to what they're actually doing.


I'm noticing more of these race baiting comments on YC too lately. AI?


No, that’s a common cope.

Not AI. Not bots. Not Indians or Pakistanis. Not Kremlin or Hasbara agents. All the above might comprise a small percentage of it, but the vast majority of the rage bait and rage bait support we’ve seen over the past year+ on the Internet (including here) is just westerners being (allowed and encouraged by each other to be) racist toward non-whites in various ways.


How's your day been?

