> While I’m certain that this technology is producing some productivity improvements, I’m still genuinely (and frustratingly) unsure just how much of an improvement it is actually creating.
I often wonder how much more productive I'd be if just a fraction of the effort and money poured into LLMs was spent on better API documentation and conventional coding tools. A lot of the time, I'm resorting to using an AI because I can't get information on how the current API of something works into my brain fast enough, because the docs are nonexistent, outdated, or scattered and hard to collate.
This is facts. All of this talk about putting agent skills directly into repos (as Markdown!) is maddening. "Where were LITERALLY ALL OF YOU whenever the topic of docs as code came up?"
This is doubly maddening with NotebookLMs. They are becoming single sources of knowledge for large domains, which is great (except you can't just read the sources, which is very "We will read the Bible to you" energy), but, in the past, this knowledge would've been all over SharePoint, Slack, Google Drive, Confluence, etc.
I've chosen to embrace the silver lining: there's now business backing to prioritize all the devx/documentation work, because the "value" is easier to quantify; LLM sessions provide a much larger sample size than inconsistent new-hire onboarding (which was also a one-time process, rather than per session).
I do think people are going way overboard with markdown though, and that'll be the new documentation debt. Needs to be relatively high level and pointers, not duplicate details; agents can parse code at scale much faster than humans.
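A "pointers, not duplication" agent file might look something like this (the filename and contents are hypothetical, just to illustrate the high-level-plus-pointers style):

```markdown
# CLAUDE.md (hypothetical example)

## Architecture
- API handlers live in `src/api/`; business logic in `src/core/`.
  Read the module docstrings there rather than duplicating them here.

## Conventions
- Run `make lint test` before committing.
- All database access goes through `src/core/repo.py`.

## Pointers, not duplication
- Schema: see `docs/schema.md` (generated; do not hand-edit).
- Deployment: see `docs/deploy.md`.
```

The point is that the file stays stable as the code churns: it tells the agent where to look, and the agent reads the actual code for the details.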
Haha indeed. At work suddenly documentation and APIs are important, but it's all for/behind "skills". Before it was always "sure, that would be nice"...
I do welcome the improvements to doc and APIs this brings though!
Yeah, I joined a project a couple of months ago, felt completely lost.
Last week, a colleague finally added for Claude all the documentation I'd have needed on day one. Meanwhile, I'm addressing issues from the other direction, writing custom linters to make sure that Claude progressively fixes its messes.
But maybe not for long. When we get long-running AIs, the knowledge locked inside the AI's thinking might supplant docs once again. It would be like having an engineer who has worked at your company for a long time and knows everything. With all the problems that implies, of course.
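A minimal sketch of the custom-linter approach mentioned above, in Python; the banned patterns here are made up for illustration, and a real setup would pick patterns matching whatever messes the agent keeps reintroducing:

```python
import re

# Patterns we don't want reintroduced (illustrative only; pick your own).
BANNED = [
    (re.compile(r"except\s*:\s*pass"), "bare except that swallows errors"),
    (re.compile(r"\bprint\("), "use the logger, not print()"),
]

def lint(path: str, text: str) -> list[str]:
    """Return one 'file:line: reason' message per banned pattern found."""
    problems = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, why in BANNED:
            if pattern.search(line):
                problems.append(f"{path}:{lineno}: {why}")
    return problems
```

Wired into CI or a pre-commit hook, this fails the build whenever the agent reintroduces a pattern you've already told it to stop using, which turns "please don't do that" into something the agent actually has to satisfy.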
That's the weird best thing about LLMs - there is finally incentive for projects to create documentation, CLIs, tests, and well organized modular codebases, since LLMs fall flat on their face without these things!
I feel like Google search results have gotten tremendously worse over the past 2 years too. It's almost like you have to use AI search to find anything useful now.
Which of course reduces traffic to sites and thus the incentives to create the content you're looking for in the first place :(
There are many groups that “win” by making search results worse. It’s an ongoing battle between them, and if someone’s blaming solely Google for it, they’re way oversimplifying.
> I often wonder how much more productive I'd be if just a fraction the effort and money poured into LLMs was spent on better API documentation and conventional coding tools.
Probably negligible. It's not a problem you can solve by pouring more money in. Evidence: configuration file formats. I've never seen programmers who enjoy writing YAML. And pure JSON (without comments) is simply not a format that should be written by humans. But as far as I know, even in the richest companies these formats are still common. And the bad thing they were supposed to replace, XML config, was popularized by rich companies too...!
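The comment complaint is concrete: strict JSON has no comment syntax at all, so a hand-maintained config file can't even explain why a value is what it is. A quick Python demonstration:

```python
import json

# A config with a comment, as a human would naturally want to write it.
commented = """{
    // why 8080: the dev proxy expects this port
    "port": 8080
}"""

try:
    json.loads(commented)
except json.JSONDecodeError:
    # Strict JSON (RFC 8259) has no comment syntax at all.
    print("rejected")
```

This is why JSON-with-comments dialects (JSON5, JSONC) and YAML keep getting adopted for configs despite their own problems.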
Programmers don’t enjoy writing things they have no good understanding of, and no good way to ascertain or predict in advance how exactly they will behave. That’s at least partly due to poor documentation. Good documentation gives you a reliable conceptual model and makes you confident about how to use a tool.
As someone who does broad activities, it supercharges a lot of things. Having a critical eye is required though. I estimate 40%-60% improvements on basic coding tasks.
And hilariously, the worst offenders are AI frameworks themselves. A couple months ago I was helping a client build out some "agentic" stuff and we switched from OpenAI Agents library to Agno. Agents is messy enough, like making inconsistent use of its own enums etc, but with Agno you can really feel that they are eating their own dog food. Plenty of times I literally could not find the API for some object, and of course their docs page pushes you toward chatting with their goddamn docs chatbot, which barfs up some outdated function signature for you.
Yeah I get this impression too. AI feels like it's papering over overwrought and badly designed frameworks, tech stacks with far too many things in them, and also the decline of people creating or advocating for really expressive languages.
Pragmatic sure, but we're building a tower of chairs here rather than building ladders like a real engineering field.
> better API documentation and conventional coding tools
Agreed, and it depends on the language I suppose. I'm a C++ developer and when you start working with templates even at a non-casual level, the compiler errors due to either genuine syntactic errors or 'seems correct but the standard doesn't support' can be infuriatingly obtuse. The LLM 'just knows' the standard (kind of, all 2k pages), and can figure out and fix most of those errors far faster than I can. In fact one of my preferred usages is to point Codex at my compiler output and get it to do nothing more than fix template errors.
Kotlin, for example, is much more in your face: the IDE does a correctness pass before you even invoke the compiler (in the traditional sense), and the language spec is considerably leaner, with less (no?) UB, unlike C++.
I agree. I think of AI as a search engine on steroids.
But I think it IS the best way to search for information, to be able to put a question in natural language. I'm always amazed just how exactly on-point the answer is.
I mean, even the best docs out there with a great search bar, like the Vue docs, still only match your search term and surface relevant topics.
Good is debatable. The docs I want point out the weird shit in the system. The AI docs I've read are all basically "the get user endpoint can be called with HTTP to get a user, given a valid auth token". Thanks, it would have been faster to read the code.
I dunno. I trained as a software engineer, pivoted to civil laborer. I just can't see a robot doing 90% of the stuff I do anytime soon. Same goes for plumber, electrician, ... even most mobile plant operations. As a supplement around the edges, sure. But replace? Not in the near term. And that's not even considering the safety certification moats around skilled labor roles.
It's one thing to use AI to touch up photos, but in the end, you probably still want photos that match your memories and good photography still has an element of taste and creativity.
> For the next eight hours, every developer who installed or updated Cline got OpenClaw - a separate AI agent with full system access - installed globally on their machine ...
Except those with ignore-scripts=true in their npm config ...
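For reference, that setting disables npm lifecycle scripts (`preinstall`/`postinstall` and friends), which are the hook this kind of supply-chain attack typically uses. In `~/.npmrc`:

```
ignore-scripts=true
```

Equivalently, `npm config set ignore-scripts true`. The trade-off is that packages which build native addons rely on install scripts, so those will need their build steps run explicitly.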
I guess it’s because I do C++ and robotics. But npm is just not part of my world. The only time I come across it is when someone gets real lazy and doesn’t ship a proper single exe distributable. Claude Code and Codex CLIs were both naughty on initial release. But are now a single file distributable the way the lord intended.
> Most abstractions in software exist because humans need help. We couldn't hold the whole system in our heads, so we built layers to manage the complexity for us.
Kind of a sloppy statement, but I don't think it's accurate to say abstraction or layering exists in software just because humans need help comprehending it. Abstractions often exist to capture the essence of some aspect of the real world, and to allow for software reuse. Won't AIs still find reusing software useful? Secondly, you equate "abstractions" with "layers", which aren't really the same thing. Layers are more about separation of concerns. Maybe it could be argued layering is a type of abstraction.
Right now I'm trying to get an AI (actually two: ChatGPT and Grok) to write me a simple HomeAssistant integration that blinks a virtual light on and off, driven by a random boolean virtual sensor. I just started using HomeAssistant and don't know it well. Two hours and a few iterations in, it still doesn't work. Winning.
HomeAssistant is probably doing too much for what you need. Imo it's not a good piece of software. https://nodered.org/ is maybe a better fit. Or just some plain old scripts.
It looks like the point is the Home Assistant integration itself. I seriously doubt they need an LED blinked on and off based on a mock sensor. That's either "for the integration test" or "as a placeholder for something more". Either way, the AI is failing.
Nah, HA is defs what I want. I agree it's terrible software. All the more motivation for me to try throwing AI at it. If the docs were better I'd just grind through the docs instead, and I'd probably be ahead, but the HA docs suck almost as bad as the code - which may have something to do with why the AIs are sucking, now that I think about it ..
Once you have btrfs you don't really need rsync anymore, its snapshot + send/receive functionality are all you need for convenient and efficient backups, using a tool like btrbk.
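For context, the workflow behind that: take a read-only snapshot (`btrfs subvolume snapshot -r`), then pipe `btrfs send` (with `-p parent` for incrementals) into `btrfs receive` on the backup disk. btrbk automates exactly that from a config file; a rough sketch (layout and retention values are illustrative from memory, check `btrbk.conf(5)` for exact syntax):

```
# /etc/btrbk/btrbk.conf (illustrative)
snapshot_preserve_min   2d
snapshot_preserve       14d
target_preserve         20d 10w

volume /mnt/btr_pool
  snapshot_dir  btrbk_snapshots
  subvolume     home
    target /mnt/backup/btrbk
```

Because send/receive transfers only the blocks that changed between snapshots, incremental runs are typically much cheaper than an rsync walk over the whole tree.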
I feel like apparmor is getting there, very, very slowly. Every package just needs to come with a declarative profile or fall back to a strict default profile.
Not AI. Not bots. Not Indians or Pakistanis. Not Kremlin or Hasbara agents. All the above might comprise a small percentage of it, but the vast majority of the rage bait and rage bait support we’ve seen over the past year+ on the Internet (including here) is just westerners being (allowed and encouraged by each other to be) racist toward non-whites in various ways.