Hacker News

I’ve long advocated that software engineers should read The Mythical Man-Month[0], but I believe it’s more important now than ever.

The last ~25 years or so have seen a drastic shift in how we build software, best trivialized by the shift from waterfall to agile.

With LLM-aided dev (Codex and Claude Code), I find myself going back to patterns that are closer to how we built software in the 70s/80s, than anything in my professional career (last ~15 years).

Some people are calling it “spec-driven development” but I find that title misleading.

Thinking about it as surgery is also misleading, though Fred Brooks’ analogy is still good.

For me, it feels like I’m finally able to spend time architecting the bridge/skyscraper/cathedral, without getting bogged down in terms of what bolts we’re using, where the steel comes from, or which door hinges to use.

Those details matter, yes, but they’re the type of detail that I can delegate now; something that was far too expensive (and/or brittle) before.

[0]https://en.wikipedia.org/wiki/The_Mythical_Man-Month



There's not a lot that Brooks got wrong but the surgical team is it.

There's not a lot of team in a surgical team. Software does not need to be limited to one thing happening at a time the way open heart surgery does. There's room for more hands.

It's more like a sports team. But we don't practice, review, or coach the way they would, and so it takes a lot longer for us to reach excellence.


> There's not a lot of team in a surgical team

Surgeon, assistant, tech, anesthesiologist, plus probably a nurse...?


Have you ever seen or been to a serious surgery? The operating room is full of people, and those are some of the most gelled teams you'll find anywhere.


Brooks' idea is that the surgeon calls all the shots and everyone else is being orchestrated by that one person. There is only one person holding a knife at any point and that's generally the surgeon.

How many people are there besides the surgeon? About six? That's pretty much all one person can wrangle.

That's not enough for a large scale software project. We keep trying to make it work with Scrum teams but it is broken and we know it.

Most of the highest functioning projects I've been on have had a couple of 'surgeons'. And while they often work on separate 'patients', it's not always the case.

Aren't there some surgeries now where more than one surgeon is operating concurrently? All I can find is that there's an Insurance Code for it.


Can you go into a bit more detail on your perspective of the 70s/80s approach vs. today? I’m an analyst with a passion for engineering, but am not an engineer by trade. So honestly I am naive to how now would be different from the past.


My take is that in the 70s/80s, programs were built from a set of blueprints of what was required. Each programmer had a set of requirements, knew what was required, when it needed to be completed by, and what tools were available to enable the next level of development. If someone lagged behind, the project halted until they caught up, but the end result was solidity and a completed application. At least during that time other programmers could improve their work and documentation.

Meanwhile with agile, it's always felt like a race to me. If you didn't complete your part, you'd spend a whole week focusing on it while the others carried on, anticipating that the sprint would result in completion of the part required, enabling the next part they'd built to be integrated.

Vibe coding offers this "make a text box write to a file" code generation, and it delivers. However, without any blueprints the code starts to crumble when you proceed to introduce middleware such as authentication.
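Stripped of the UI, the kind of snippet the parent describes reduces to something like this hypothetical sketch (not code from the thread, and `save_note` is an invented name). It works fine in isolation, which is exactly the point: nothing about it decides who may write, where files live, or what happens on conflict, and those are the questions authentication middleware forces later:

```python
# Hypothetical sketch of the "make a text box write to a file" feature.
# Trivial to generate -- but note everything it silently assumes:
# no authentication, no input validation, no decision about storage.
from pathlib import Path

def save_note(contents: str, path: str = "notes.txt") -> int:
    """Write the text-box contents to disk; returns characters written."""
    return Path(path).write_text(contents)

# Works fine on its own:
save_note("hello world")
assert Path("notes.txt").read_text() == "hello world"
```

The crumbling the parent mentions starts when a second feature needs to know *whose* note this is: that answer touches every call site of a function like this.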

It was never discussed how authentication should authenticate, because someone is already that far ahead in their work, so you end up mashing more code together until it does work. It does, and your product holds promise.

The "we'll improve, document, and fix it later" never comes because of the influx of feature requests, leading to bloat. Now bogged down in tech debt, you spend resources wrangling the mess with senior engineers.

The senior engineers are now fixing the mess, and their experienced code doesn't integrate with that of the juniors. Having to tidy that code, the seniors leave behind the real work they were originally tasked with improving, turning the whole codebase into a diabolical mess. But hey, it works.

Hardware is cheaper than refactoring so instead you then "scale" by throwing more hardware at it until it handles the pressure.

Someone then leaves and knowledge is lost. Some exec promotes whoever on the team was crazy-sane enough to keep it all in check into the senior role, while skimping them on pay; now they're handling the departed colleague's work, their own, and keeping the team in check.

The product starts to fail, and new junior engineers are brought in with fresh, naive wisdom, jumping up and down about the newest fancy library that will finally make the process complete, causing the cycle to repeat indefinitely.


> My take is that 70/80's built programs from a set of blueprints of what was required. Where each programmer had a set of requirements, knew what were required, when it is needed to be completed by and the tools available to enable the next level in development.

The thing is, you couldn't start dev until you had those blueprints. So that's a lot of time at the start of the project where development had to sit idle, even if you already had a good idea of what the architecture would be.

> If someone lagged behind then the project halted until but the end result was solidity and a completed application. At least during that time other programmers could improve their work and document.

No, you didn't get this. Apps that were completed had bugs then too, whether in design, specification, or implementation. It's why the waterfall paper basically said you'd have to build it twice either way, so you should try to accelerate building it the first time so you'd know what you messed up when you built it the second time.

Or as Mel Brooks, who wrote the Mythical Man-Month would say, "Build one to throw away; you will, anyhow."

Nor could programmers productively spend downtime simply documenting things; the documentation was supposed to have already been written by the time they were writing out punch cards. The "programming" had already been done, in principle; what remained was transcribing the processes and algorithms into COBOL or FORTRAN.

Startups are perfectly free to adopt the methods of the 70s if they wish, but they will be outcompeted and ground into dust in the process. Likewise, there is more to agile than Scrum (which is what you're describing with sprints), and it seems weird to describe the dread you'd get from blocking your team if it takes a week to do your part, but act as if a week's slip on the critical path in a waterfall effort is no big deal.

I mean, you're actually right that many (not all) waterfall-based teams treat it like it's no big deal, but that's the reason that waterfall projects were often disastrously over-time and over-budget. "We've already slipped 3 weeks to the right, what's another day?". Well, those add up... at least with agile you can more easily change the scope to fit the calendar, or adapt to changing market pressures, or more rapidly integrate learnings from user research.


I imagine The Mythical Man Month would have been much more entertaining if written by Mel Brooks ;-)


More the 70s than the 80s; our company wrote software in the mid 80s more or less the same way we (my company) still do. In the 80s-90s we just shipped a v1.0 a lot later than we do now, simply because in those days patches were basically impossible. Software on cartridges especially meant you had to ship with 'no bugs'. But no blueprints; we just started on an idea we had and worked until it was good enough to ship.


> But no blueprints; we just started on an idea we had and worked until it was good enough to ship.

Yes, exactly. A lot of UNIX and other very good software for computers back then came about this way too. No or minimal blueprints and a lot of iterative implementation & testing and reacting to what you see.

It's hard convincing people today that agile methods have been in use long before sprints were a thing.


We're now getting in a position where we have CAD for software, aka CASE, Computer Aided Software Engineering. You can focus on the design of the software, instead of spending hours typing out code.


The question is: is that an AI thing, or just the domain of conventional devtools?


Going forward it will probably mostly be a combination of both. Similar to how CAD software is also using ML and AI models for simulations and other tools.


> bridge/skyscraper/cathedral

> Those details matter, yes, but they’re the type of detail that I can delegate now

No...

If you're building a skyscraper, there's no world where you can delegate where the steel or bolts come from. Or you'll at least need to care about what properties that exact steel has and guarantee every bit used on your project matches these constraints.

If you don't want to care about those, build residential houses, which have 1000x fewer constraints and can comparatively be rebuilt on a dime.

You might be thinking about interior decoration or floor arrangement? Those were always a different matter, left to the building owner to deal with.


In the world of construction there’s generally an owner, who then works with three groups: an architect, an engineer, and a general contractor.

Depending on what you’re building, you might start with an architect who brings on a preferred engineering firm, or a GC that brings on an architect, etc.

You’re right to question my bridge/bolt combo, as the bolts on a suspension bridge are certainly a key detail!

However, as a programmer, it feels like I used to spend way too much time doing the work of a subcontractor (electrical, plumbing, hvac, cement, etc.), unless I get lucky with a library that handles it for me (and that I trust).

Software creation thus always felt like building a new cathedral, where I was both the architect and the stone mason, and everything in between.

Now I can focus on the high level, and contract out the minutiae like a pre-fab bridge, quality American steel, and decorative hinges from Restoration Hardware, as long as they fit the requirements of the project.


I think you're taking a metaphor a bit too literally.


No, it's perfectly apt. One comment is stating that using LLMs allows them to gloss over the details. The responding comment is saying that glossing over details is not a great idea, actually. I think that statement holds up very well on both sides of the analogy. You can get away with glossing over certain details when building a little shed or a throwaway python script. If you're building a skyscraper or a full-fledged application being used in the real world by thousands or millions of people, those details being glossed over are the foundation of your entire architecture, will influence every other part of the decision-making process, and will cause everything to crumble if handled carelessly.


Look at my beautiful cathedral

Look at my cathedral of tailwind ui

I’m sure they put locks on the doors


> If you're building a skyscraper, there's no world where you can delegate where the steel or bolts come from.

ah yes, I'm sure the CEO of Walsh (https://www.walshgroup.com/ourexperience/building/highrisere...) picks each bolt themselves directly without delegation



