I've faced the same but my conclusion is the opposite.
In the past 6 months, all my code has been written by Claude Code and Gemini CLI. I've written backend, frontend, infrastructure, and iOS code. Given my career trajectory, all of this would have been impossible a couple of years ago.
But the technical debt has been enormous. And I'll be honest: my understanding of these technologies isn't at an 'expert' level. I'm 100% sure any experienced dev could go through my code and think it's a load of crap requiring serious re-architecture.
It works (that's great!) but the 'software engineering' side of things is still subpar.
A lot of people aren’t realizing that it’s not about replacing software engineers, it’s about replacing software.
We’ve been trying to build well engineered, robust, scalable systems because software had to be written to serve other users.
But LLMs change that. I have a bunch of vibe-coded command-line tools that exactly solve my problems but would very likely make terrible software. The thing is, these programs only need to run on my machine, the way I like to use them.
In a growing class of cases, bespoke tools are superior to generalized software. Historically this was not the case because it took too much time and energy to maintain these things. But today, if my vibe-coded solution breaks, I can rebuild it almost instantly (because I understand the architecture). It takes less time today to build a bespoke tool that solves your problem than it does to learn how to use existing software.
There’s still plenty of software that cannot be replaced with bespoke tools, but that list is shrinking.
This is the thing a lot of skeptics aren't grappling with. Software engineering as a profession is mostly about building software that can operate at scale. If you remove scale from the equation then you can remove a massive chunk of the complexity required to build useful software.
There are a ton of recipe management apps out there, and all of them are more complex than I really need. They have to be, because other people looking for recipe management software have different needs and priorities. So I just vibe coded my own recipe management app in an afternoon that does exactly what I want and nothing more. I'm sure it would crash and burn if I ever tried to launch it at scale, but I don't have to care about that.
If I was in the SaaS business I would be extremely worried about the democratization of bespoke software.
Tools that let the non-professional developer put their skills on wheels have been part of the equation ever since we've had microcomputers, if not minicomputers.
But they've always basically required that you become a programmer at the end of the day in order to get those benefits. The spreadsheet is probably the largest intruder in this ecosystem, but that's only the case if you don't think that operating a spreadsheet is programming. It is.
What people are describing is that normies can now do the kinds of things that only wizards with Perl could do in the 90s. The sorts of things that were always technically possible with computers if you were a very specific kind of person are now possible with computers for everyone else.
Languages like BASIC and Python have always been useful to people for whom programming is a part-time thing. Sure you have to learn something but it is not like learning assembly or C++.
On the other hand, it is notorious that people who don't know anything about programming can accomplish a little bit with LLM tools and then they get stuck.
It's part of what is so irksome about the slop blog posts about AI coding that HN is saturated with now. If you've accomplished something with AI coding, it is because of (1) your familiarity with the domain you're working in and (2) your general knowledge of how programming environments work. With (1) and (2) you can recognize the difference between a real solution and a false solution and close the gap when something "almost works". Without them, you're going to go around in circles at best. People are blogging as if their experience with prompting or their unscientific experiments with this model and that model were valuable, but they're not; (1) and (2) are valuable, and anything specific about AI coding on 2026-02-18 will be half-obsolete by 2026-02-19. So of course they face indifference.
I think even BASIC and Python don’t get out of “programming”. Neither did SQL. They’re friendlier interfaces to programming, but the real barrier is still understanding the model of computation PLUS understanding the quirks of the language (often quite hard to separate for a newbie!). I think professional programmers think that Python or JS is somehow magically more accessible because it’s not something nasty like C++, but that’s not really a widely shared or easily justified opinion.
Also who cares if someone gets going with an LLM and gets stuck? Not like that’s new! GitHub is littered with projects made by real programmers that got stuck well before any real functionality. The advantage of getting stuck with a frontier code agent is you can get unstuck. But again, who cares?! It’s not like folks who could program were really famous for extending grace and knowledge to those who couldn’t, so it’s unlikely some rando getting stuck is something that impacts you.
I don’t know what slop blog stuff you’re talking about. I think you should take some time to read people who have made this stuff work; it’s less magic than you might think, just hard work.
The basic skill behind programming is thinking systematically. That's different from, say, knowing what exactly IEEE floats are or how to win arguments with the borrow checker in Rust. Languages like Python and BASIC really do enable the non-professional programmer who can do simple things and not have to take classes on data structures and algorithms, compilers and stuff.
People who get stuck fail to realize their goals, waste their time, and will eventually give up on using these tools.
But seriously, think about it: people had basically the same brains 20,000 years ago, and there were dyslexic people back then too, but it didn't matter because there wasn't anything to read. Today computers reward the ability to think and punish reacting to vibes, yet natural selection is a slow process.
This is the common pitch, right down to recommending CP Snow.
It’s also horse-apples. For every computer programmer with a real systematic vision of the world, there are two who have mastered the decidedly unsystematic environment they work in. That’s because lots of business problems depend on knowing how IEEE floats work and arguing with, e.g., the borrow checker in Rust. Perhaps more depend on that than on systematic thinking. Either way, a lot.
Even if we accept that real programming is systematic/logical and not about adapting to an environment, it sure as hell doesn’t present itself that way to users! The entire history of computing is serious engineers being frustrated that the machines they work with don’t allow them to speak in a language they consider logical and elegant. Even the example “non-professional” programming languages (or programming languages suitable for non-professional programmers) arose out of intentional design toward user adoption. I’m not saying that makes them similar to agents. I’m saying that it’s REAL CLEAR that the coupling between what the user needs to do and the orderly logic of computation is fuzzy at best.
Can you explain this appearance of Osage oranges to me? (Sounds like a meme I'm not familiar with?) Are you saying GP made an "oranges vs apples" classification without realising that the compared items are actually both "oranges" _and_ "apples"?
A lot of people don’t care about software other than the fact that the ones they use work well. They don’t want to create it, to maintain it, or to upgrade it. That’s what the IT department is for.
This seems like a big HN/VC bubble thing, thinking that average people are interested in software at all... they really aren't.
People want to open Netflix / YT / TikTok, open instagram, scroll reddit, take pictures, order stuff online, etc. Then professionals in fields want to read / write emails, open drawings, CADs, do tax returns, etc.
If anything, overall interest in software seems to be going down for the average person compared to the 2010s. I don't feel like normal people are going to stop using most of the above in favor of LLMs. LLMs certainly do compete with Googling and email-writing for regular people, though.
I absolutely believe in that value proposition - but I've heard a lot about how beneficial it will be for large organizationally backed software products. If it isn't valuable in that latter scenario (which I have uncertainty about), then there is no way companies like OpenAI could ever justify their valuations.
> I've heard a lot about how beneficial it will be for large organizationally backed software products
It's a generic interface to anything, which allows people to communicate in their own way, and the LLM is pretty good at figuring it out. For non-technical people or customers who don't fully understand the product, it's going to be very helpful. RIP outsourced call centers, we won't miss you.
Manual search and navigation might be on the chopping block soon. Knowing how to navigate big software is often a bespoke skill. Now you can just talk to the computer and tell it what you're trying to do. Al down in the shoe dept doesn't have to figure out how to right click or what a context menu is. It's a fundamental UI change.
> there is no way companies like OpenAI could ever justify their valuations
The value proposition isn't really "we'll help you write all the code for your company" it's a world where the average user's computer is a dumb terminal that opens up to a ChatGPT interface.
I didn't initially understand the value prop but have increasingly come to see it. The gamble is that LLMs will be your interface to everything the same way HTTP was for the last 20 years.
The mid-90s had a similar mix of deep skepticism and hype-driven madness (and if you read my comments you'll see I've historically been much closer to the skeptic side, despite a lot of experience in this space). But even in the 90s the hyped-up bubble riders didn't really foresee that HTTP would be how everything happens. We've literally hacked a document format and a document-serving protocol to build the entire global application infrastructure.
We saw a similar transformation with mobile devices where most of your world lives on a phone and the phone maker gets a nice piece of that revenue.
People thought Zuck was insane for his metaverse obsession, but what he was chasing was that next platform. He was wrong, of course, but his hope was that VR would be the way people did everything.
Now this is what the LLM providers are really after. Claude/ChatGPT/Grok will be your world. You won't have to buy SaaS subscriptions for most things because you can just build them yourself. Why use Hubspot when you can just have AI do all your marketing? Then you only need Hubspot for its message-sending infrastructure. Why pay for a budgeting app when you can just build a custom one that lives on OpenAI's servers (today your computer, but tomorrow theirs)? Companies like banks will maintain interfaces to LLMs, but you won't be doing your banking in their web app. Even social media will ultimately be replaced by an endless stream of bespoke images, video, and content made just for you (and of course it will be much easier to inject advertising into a space you don't even recognize as advertising).
The value prop is that these large, well funded, AI companies will just eat large chunks of industry.
Similar experience for me. I've been using it to make Qt GUIs, something I always avoided in the past because it seemed like a whole lot of stuff to learn when I could just make a TUI or use Tkinter if I really needed a GUI for some reason.
Claude Code is producing working useful GUIs for me using Qt via pyside6. They work well but I have no doubt that a dev with real experience with Qt would shudder. Nonetheless, because it does work, I am content to accept that this code isn't meant to be maintained by people so I don't really care if it's ugly.
We switched to solar in 2021 expecting a 3.5-year payback. Electricity prices rose so fast that we recovered the investment in under two years.
Also, the national grid is notorious for its frequent blackouts (load-shedding), which go back to the early ’90s. Solar has given us uninterrupted supply in the mornings and longer backup at night.
We got rooftop solar 1.5 years ago in Canada. Payoff will take 6-7 years, but we got an interest-free loan to cover it.
So we’ll just pay what we would have paid for power during those years (~$1,000 a year), and then we’ll have free power for 20 more, saving something like $20,000 for $0 investment.
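The arithmetic, sketched out (these are my rough ballpark numbers, not exact figures from the loan paperwork):

```python
# Rough payback math for our rooftop solar (ballpark numbers, not a quote).
annual_power_bill = 1000   # ~$ we'd have paid the utility anyway, per year
payoff_years = 7           # interest-free loan, repaid out of those savings
free_years = 20            # panel lifetime remaining after the loan is done

# During payoff we pay roughly what the old power bill was, so net cost ~$0.
out_of_pocket = annual_power_bill * payoff_years - annual_power_bill * payoff_years

# After payoff, every avoided bill is pure savings.
total_savings = annual_power_bill * free_years
print(out_of_pocket, total_savings)  # 0 20000
```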
Payback - I never factored that in or even thought about that.
I was more concerned about having reliable power and reducing my electricity bill.
The daily 2-hour power cuts were getting out of hand, and I was running my business from my home office, so the tax incentives helped slightly.
The grid is more stable now that new power units have come online, but a big chunk of middle-class consumers have gone solar and use far less grid power, so the local town councils are having trouble balancing their books (town councils resell electricity from the national operator).
All these coding-agent workflows really drive home how important a solid test suite is, but who’s actually writing the tests? In my case, Claude Code keeps missing key scenarios, and I’ve had to point out every gap myself.
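A made-up illustration of the kind of gap I mean — the agent writes the happy-path test and skips the boundaries (the function and tests here are hypothetical, not from my codebase):

```python
def parse_discount(pct: str) -> float:
    """Parse a percentage string like '15%' into a fraction."""
    value = float(pct.rstrip("%"))
    if not 0 <= value <= 100:
        raise ValueError(f"discount out of range: {pct}")
    return value / 100

# The kind of test the agent writes: happy path only.
assert parse_discount("15%") == 0.15

# The gaps I had to point out myself: boundaries and invalid input.
assert parse_discount("0%") == 0.0
assert parse_discount("100%") == 1.0
try:
    parse_discount("150%")   # out of range: must raise, not silently pass
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```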
Also reviewing LLM generated code is way more mentally draining and that’s with just one agent. I can’t imagine trying to review code from multiple agents working in parallel.
Finally I’ve shipped a bunch of features to prod with Claude writing all the code. Productivity definitely went up, but my understanding of the codebase dropped fast. Three months in I’m actually struggling to write good prompts or give accurate estimates because my grasp of the code hasn’t really grown (code reviews only help so much).
Curious if anyone else has run into the same thing?
Can anyone explain why tiling window managers are useful? They seem like a waste of space to me. I prefer having my various windows all over the place and just alt-tabbing between them (or using other means of opening the right app). I much prefer having the app I'm working on in the center of the screen, so that's what makes sense for me.
They're useful to let you not have to think about positioning windows with precision.
If that doesn't feel useful to you, then maybe a tiling wm isn't right for you. That's entirely fine.
My wm has an "escape" in that I can define floating desktops, and by default I have one, mostly used for file management, because there are things where I agree it's better to have floating/overlapping windows.
It doesn't really matter if it's a "waste of space" - I have two large monitors, and 10 virtual desktops to spread windows between (I'd add more, but I haven't felt the need). To the point where my setup, by default, centers the window with large margins when I have just one window open on a screen because it's more comfortable (and I'm just one keypress away from fullscreening the app anyway).
Most of the time I use tiling because I like not having to care about the layout beyond those defaults.
But I can also configure specific layouts. E.g. I have desktop dedicated to my todo list, a list of done items, and notes, and it has a fixed layout that ensures those windows are always in the same placement, on the same desktop.
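For a concrete flavor of what that fixed layout looks like, here's a sketch in i3/Sway config syntax (my setup differs in details, and the app_ids below are placeholders for whatever you actually run):

```
# Pin todo/notes windows to a dedicated workspace.
# app_ids are placeholders; find real ones with `swaymsg -t get_tree`.
assign [app_id="todo-app"]  workspace number 9
assign [app_id="notes-app"] workspace number 9

# One keypress to jump to that desktop.
bindsym $mod+9 workspace number 9
```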
Maybe it's also due to differences in personality. I like to focus on one or two things at a time. And on second thought, my argument about wasting space probably doesn't make sense; perhaps I'm really thinking about "information overload".
In my day to day I have a couple terminals (each with 4-5 tabs, some are running screen sessions), two browsers (with max 3-4 tabs open), music player, at least 2-3 IDE's open (JetBrains), Notes, mail client, Slack. This is across two monitors.
If you've tried it and it doesn't fit, that seems fine, it's all just personal workflow.
For me it's pure speed at getting to where I need to go. My notes live on workspace 1, my main workspace on 2 and browser on 3, so I'm just a single key combination away from most of what I need. Can still alt+tab if I like.
My laptop has a small 11.1'' screen, so using a traditional desktop with smaller windows is not practical for me. Plus, not having the windows overlap with each other by default gives a more structured workflow.
Unrelated to the comment: I wrote this answer 3 times, but the damn process killer on Android kept deleting it, so I had to re-write it each time. If I sound mad, it's because I am.
+1 for AeroSpace. I've been an i3/Sway user for about a decade and really missed a similar experience on the Mac. All the previous options came with gotchas or were nowhere near the usability of i3/Sway.
AeroSpace is almost there, it has a few quirks and annoyances and is missing some features but it's close enough that it's now my daily driver for client work requiring being on a Mac.
Highly recommended and hopefully it continues to evolve. Also hoping that the churn in macOS is not going to kill it for some unexpected reason.
As for Hyprland, I tried it, and once I got to a setup I felt comfortable with, I realized I had basically replicated my Sway setup with no added benefits, so I just switched back to Sway. I'm not much into ricing anyway; I mostly want the chrome to stay out of my way.
That said, for people unfamiliar with i3/Sway, Hyprland could be a great way of getting into tiling so I'm definitely rooting for them!
As a "Linux at home, macOS at work" user, I can't stand window management on macOS. It is so inefficient and hard to manage.
The AeroSpace tiling window manager finally changed my work day from constant struggle to only occasional struggle with macOS. I use it with sketchybar, which is not yet perfect but could become quite good and performant with some work.
I currently use Gnome on Linux, not a tiling window manager, and I would gladly go with Hyprland, but about a year ago while I was trying it out, it crashed quite a few times so it clearly was not ready for production use back then. Will give it a shot again.
Finally, someone emphasized the cognitive load of AI-assisted coding.
It definitely makes me faster, but it's a constant prompt → code review → prompt → code review → scratch → prompt → code review cycle, which just requires extreme focus.
We set up solar in our home in Pakistan back in 2023. Our primary driver was cost, since electricity was getting extremely expensive during summers when the ACs were on. We expected to get the return in 4 years, but inflation went so high in subsequent years that it took just 2.
I remember that back in the summer of 1996 in Pakistan, our household was one of the first few to have the internet.
At that time, angelfire.com used to give out free webspace. My brother got hold of a pirated version of CorelDraw and set up a fan website for his favorite rock band, Junoon, which incidentally is still online: https://www.angelfire.com/pa/JUNOON
And then my brother met the band at a concert, and they actually recognized him because of the website. I guess that was the first time we realized how impactful the internet was going to be.
I love your brother's site, so much. It looks like the web counter is still working, and that I'm not the only person from here checking it out, so I hope angelfire is ready for a bit of a "hug".
What are the odds of bringing our remote employees to the US on an L1 visa? They've worked for us for more than 5 years in management roles.
Also, the startup was acquired some time ago; does that impact the L1 application?
If these employees manage other employees now and will manage other employees in the U.S., then the odds are actually high. Conversely, if they aren't managing any employees and won't be managing any in the U.S., then, unless their work is highly complex and technical, the odds are low. Does the acquiring company also own the company abroad?