This is more of a reflection of how our profession has not meaningfully advanced. OP talks about boilerplate. You talk about grunt work. We now have AI to do these things for us. But why do such things need to exist in the first place? Why hasn't there been a minimal-boilerplate language and framework and programming environment? Why haven't we collectively emphasized the creation of new tools to reduce boilerplate and grunt work?
This is the glaring fallacy! We are turning to unreliable stochastic agents to churn out boilerplate and do toil that should just be abstracted or automated away by fully deterministic, reliably correct programs. This is, prima facie, a degenerative and wasteful way to develop software.
Saying boilerplate shouldn’t exist is like saying we shouldn’t need nails or screws if we just designed furniture to be cut perfectly as one piece from the tree. The response is “I mean, sure, that’d be great, not sure how you’ll actually accomplish that though”.
Great analogy. We've attempted to produce these systems and every time what emerges is software which makes easy things easy and hard things impossible.
The reason Japanese carpenters do or did that is that sea air + high humidity would absolutely rot anything held together with nails and screws.
No furniture is really designed from a single tree, though. They aren't massive enough.
I agree with the overall sentiment. But the analogy is highly flawed. You can't compare physical things with software. Physical things are way more constrained, while software is super abstract.
I can and will compare them, analogies don’t need to be perfect so long as they get a point across. That’s why they’re analogies, not direct perfect comparisons.
I very much enjoy the Japanese carpentry styles that exist though, off topic but very cool.
I can tell you about 1000 ways, the problem is there are no corporate monetary incentives to follow them, and not much late-90s-era FOSS ethos going around either...
This is a terribly confused analogy, afaict. But maybe if you could explain in what sense boilerplate, as defined in https://en.wikipedia.org/wiki/Boilerplate_text, is anything like a nail, it could be less confusing.
Saying boilerplate should exist is like saying every nail should have its own hammer.
Some amount of boilerplate probably needs to exist, but in general it would be better off minimized. For a decade or so there's sadly been a trend of deliberately increasing it.
While it sounds likely true for the US, it's the opposite in Germany:
likely due to societal expectations on "creature comforts" and German homes not being framed with 2x4's but instead getting guild-approved craftsmen to construct a roof for a brick building (with often precast concrete slabs forming the intermediate floors; they're segmented along the non-bridging direction to be less customized).
We’re limited by the limits of our invention though. We can’t set the parameters and features to whatever we want, or we’d set them to “infinitely powerful” and “infinitely simple” - it doesn’t work like that however.
Well, depending on the value proposition, or the required goals, that’s not necessarily true. There are pros and cons to different approaches, and pretending there aren’t downsides to such a switch is problematic.
Yes, and it's why AI fills me with impending doom: handing over the reins to an AI that can deal with the bullshit for us means we will get stuck in a Groundhog Day scenario of waking up with the same shitty architecture for the foreseeable future. Automation is the opposite of plasticity.
Maybe if you fully hand over the reins and go watch YouTube all day.
LLMs allow us to do large but cheap experiments that we would never attempt otherwise. That includes new architectures. Automation in the traditional sense is opposite of plasticity (because it's optimizing and crystalizing around a very specific process), but what we're doing with LLMs isn't that. Every new request can be different. Experiments are more possible, not less. We don't have to tear down years of scaffolding like old automated systems. We just nudge it in a new direction.
I don’t think that will happen. It’s more like a 3d printer where you can feed in a new architecture and new design every day and it will create it. More flexibility instead of less.
Groundhog Day is optimistic, I think. It will be like "The Butterfly Effect": every attempt to fix the systems using the same dumb, rote solutions will make the next iteration of the architecture worse and more shitty.
When humans are in the loop everything pretty much becomes stochastic as well. What matters more is the error rate and result correctness. I think this shifts the focus towards test cases, measurement, and outcome.
A few days ago I lost some data including recent code changes. Today I'm trying to recreate the same code changes - i.e. work I've just recently worked through - and for the life of me I can't get it to work the same way again. Even though "just" that is what I set out to do in the first place - no improvements, just to do the same thing over again.
Everything we do is a stochastic process. If you throw a dart 100 times at a target, it's not going to land at the same spot every time. There is a great deal of uncertainty and non-deterministic behavior in our everyday actions.
> throw a dart ... great deal of uncertainty and non-deterministic behavior in our everyday actions.
Throwing a dart could not be further away from programming a computer. It's one of the most deterministic things we can do. If I write if(n>0) then the computer will execute my intent with 100% accuracy. It won't compare n to 0.005.
You see arguments like yours a lot. It seems to be a way of saying "let's lower the bar for AI". But suppose I have a laser guided rifle that I rely on for my food and someone comes along with a bow and arrow and says "give it a chance, after all lots of things we do are inaccurate, like throwing darts for example". What would you answer?
As much as it’s true that there’s stochasticity involved in just about everything that we do, I’m not sure that that’s equivalent to everything we do being a stochastic process. With your dart example, a very significant amount of the stochasticity involved in the determination of where the dart lands is external to the human thrower. An expert human thrower could easily make it appear deterministic.
If we are talking in terms of IRL physics, there is no such thing as a deterministic system outside of theory - everything is stochastic to differing degrees, including your brain that came up with these thoughts.
I think that both of you are right to some extent.
It’s undeniable that humans exhibit stochastic traits, but we’re obviously not stochastic processes in the same sense as LLMs and the like. We have agency, error-correction, and learning mechanisms that make us far more reliable.
In practice, humans (especially experts) have an apparent determinism despite all of the randomness involved (both internally and externally) in many of our actions.
Stochastic vs deterministic is arguably a property of modelling, not reality.
Something so complex that we cannot model it as deterministic is hence treated as stochastic. We can just as easily model a stochastic thing as deterministic by ignoring the stochastic parts.
Separating the subjective appearance of things from how we can conceptualise them as models begs a deeper philosophical question of how you can talk about the nature of things you cannot perceive.
Not interested in joining a pile-on, but I just wanted to point out how difficult reproducible builds are. I think there's still a bit of unpredictability in there, unless we go to extraordinary lengths (see also: software proofs).
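As a small illustration of how much has to be pinned down for even a tiny artifact to be reproducible, here is a minimal Python sketch (the file contents are made up): normalizing member order, timestamps, and ownership is exactly what makes two runs produce byte-identical archives.

```python
import hashlib
import io
import tarfile

def build_archive(files):
    """Build an uncompressed tar in memory with normalized metadata."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name in sorted(files):           # fixed member ordering
            data = files[name]
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            info.mtime = 0                   # fixed timestamp
            info.uid = info.gid = 0          # fixed ownership
            info.uname = info.gname = ""
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

files = {"src/main.c": b"int main(void){return 0;}\n", "README": b"hello\n"}
h1 = hashlib.sha256(build_archive(files)).hexdigest()
h2 = hashlib.sha256(build_archive(files)).hexdigest()
assert h1 == h2  # identical bytes on every run
```

Leave out any one of those normalizations (wall-clock mtimes, directory iteration order) and the hashes start drifting between runs, which is the "bit of unpredictability" in question.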
This is very true for the most basic approaches of using stochastic agents for this purpose, especially with generalized agents and approaches.
It is possible to get much higher quality not just with oversight, but by constraining the stochastic agents so that they have no choice but to converge reliably toward the desired vector of work.
Human-in-the-loop AI is fine. I'm not convinced that everything needs to be automated; it's entirely possible to get further and more reps in on a problem with the tool, as long as the human is the driver and uses the stochastic agent as a thinking partner, and not the other way around.
How big a dent do you think we could make if we poured $252 billion[0] just into paying down all our towers of tech debt and developing clean abstractions for all these known problems?
nothing prevents stochastic agents from producing reliable, deterministic and correct programs. it's literally what the agents are designed for. it's much less wasteful than me doing the same work, and much, much less wasteful than trying to find a framework for all frameworks.
Reduced mental load. When it's proven that a given input will always result in the same output, you don't have to verify the output. And you can just chain processes together without having to worry about time wasted because of deviations.
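A minimal Python sketch of that chaining idea (the `normalize`/`pipeline` functions are invented for illustration): because each step is deterministic, results can even be cached and reused without any re-verification.

```python
from functools import lru_cache

# Deterministic: the same input always yields the same output, so a
# cached result can be reused without re-checking it.
@lru_cache(maxsize=None)
def normalize(text: str) -> str:
    return " ".join(text.split()).lower()

# Chaining deterministic steps: no need to inspect intermediate output.
def pipeline(text: str) -> int:
    return len(normalize(text))

assert pipeline("  Hello   World ") == pipeline("hello world") == 11
```

The `lru_cache` here is only sound because `normalize` is a pure function; with a stochastic step in the chain, caching and skipping verification would both be unsafe.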
Good point. Non-determinism is not fundamentally problematic on many levels. What is important is that the essential behavioral invariants of the systems are maintained.
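A small Python sketch of checking invariants rather than exact outputs (the task list is made up): the function's output order is non-deterministic, but the properties we actually care about can still be asserted on every run.

```python
import random

def assign_review_order(tasks):
    """Non-deterministic: the returned order varies from run to run."""
    order = list(tasks)
    random.shuffle(order)
    return order

tasks = ["lint", "test", "build", "deploy"]
result = assign_review_order(tasks)

# The exact output differs per run, but the invariants must hold:
assert sorted(result) == sorted(tasks)  # nothing added or dropped
assert len(result) == len(tasks)
```

This is the shift in focus the parent describes: instead of demanding bit-for-bit identical output, you test that the behavioral invariants survive the randomness.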
My take: money. Years ago, when I was cutting my teeth in software, efficiency was a real concern. Not just efficiency for limited CPU, memory, and storage. But also how you could maximize the output of smaller head count of developers. There was a lot of debate over which methodologies, languages, etc, gave the biggest bang for buck.
And then… that just kind of dropped out of the discussion. Throw things at the wall as fast as possible and see what sticks; deal with the consequences later. And to be fair, there were studies showing that choice of language didn't actually make as big a difference as the emotions behind the debates suggested. And then the web… designed by committee over years and years, with never the ability to start over. And lots of money meant that we needed lots of manager roles too. And managers elevate their status by having more people. And more people means more opportunity for specialization. It all becomes an unabated positive feedback loop.
I love that it’s meant my salary has steadily climbed over the years, but I’ve actually secretly thought it would be nice if there was bit of a collapse in the field, just so we could get back to solid basics again. But… not if I have to take a big pay cut. :)
Many of the languages that allow people to quickly develop software end up with their own tradeoffs. Some of them have unusual syntax, at least in part of the language. Many of them allow duck typing, which many consider a major detriment to production reliability. Some of them are only interpreted. Some of them have a syntax people just don’t like. Some of them are just really big languages with lots of features, because getting rid of the boilerplate often means more builtins or a bigger standard library. For some of them, either the runtime or the build time leaves a lot to be desired.
Here’s an incomplete list for those traits. For unusual, there’s many of the FP languages, Ada, APL, Delphi/Object Pascal, JS, and Perl. For duck typing, there’s Ruby, Python, PHP, JS, and Perl. For only interpreted, there are Ruby, PHP, and Perl (and formerly for some time Python and JS). For syntax that’s not necessarily odd (but may be) but lots of people find distasteful there’s Perl, any form of Lisp, APL, Haskell, the ML family, Fortran, JS, and in some camps Python, PHP, Ruby, Go, or anything from the Pascal family. For big languages with lots of interacting parts there’s Perl, Ada, PHP, Lisp with CLOS, and Julia. For slowdowns, there’s Julia, Python, PHP, and Ruby. The runtime for Perl is actually pretty fast once it’s up and running, but having to build the app before running it on every invocation makes for a slow start time.
All that said, certain orgs do impressive projects pretty quickly with some of these languages. Some do impressively quick work with even less popular languages like Pike, Ponie, Elixir, Vala, AppScript, Forth, IPL, Factor, Raku, or Haxe. Notice some of those are very targeted, which is another reason boilerplate is minimal. It’s built into the language or environment. That makes development fast, but general reuse of the code pretty low.
We have been emphasizing the creation of abstractions since forever.
We now have several different hardware platforms, programming languages, OS's, a gazillion web frameworks, tons of databases, build tools, clustering frameworks and on and on and on.
We haven't done so entirely collectively, but I don't think the amount of choice here reflects that we are stupid, but rather that "one size doesn't fit all". Think about the endless debates and flame wars about the "best" of those abstractions.
I'm sure that Skynet will end that discussion and come up with the one true and only abstraction needed ;)
I feel this some days, but honestly I’m not sure it’s the whole answer. Every piece of code has some purpose or expresses a decision point in a design, and when you “abstract” away those decisions, they don’t usually go away — often they’re just hidden in a library or base class, or become a matter of convention.
Python’s subprocess, for example, has a lot of args, and that reflects the reality that creating processes is finicky and there are a lot of subtly different ways to do it. Getting an LLM to understand your use case and create a subprocess call for you is much more realistic than imagining some future version of subprocess where the options are just magically gone and it knows what to do, or where we’ve standardized on only one way to do it, one thing that happens with the pipes, one thing for the return code, and all the rest of it.
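For a feel of how many decisions even a "simple" process launch carries, here is a short sketch using real `subprocess.run` parameters (it assumes a POSIX `echo` is on the PATH):

```python
import subprocess

# Each keyword below is a genuine decision point: what happens to
# stdout/stderr, how bytes are decoded, how long to wait, and what a
# nonzero exit code should do.
result = subprocess.run(
    ["echo", "hello"],
    capture_output=True,   # collect stdout/stderr instead of inheriting
    text=True,             # decode bytes to str
    timeout=5,             # don't hang forever
    check=True,            # raise CalledProcessError on nonzero exit
)
assert result.stdout.strip() == "hello"
assert result.returncode == 0
```

None of these knobs is gratuitous; each one encodes a choice a "magic" zero-option API would have to guess at.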
I actually prefer the world with boilerplate connecting the more important pieces of code together over opinionated frameworks, because the boilerplate can evolve; changing opinionated frameworks is much harder, and it's probably done by full rewrite. The thing is, the boilerplate needs to be kept to a minimum; that's what I consider good API design. It allows you to do custom things, so you need some glue code, but not so much that you are writing a new framework each time you use it.
> Why hasn't there been a minimal-boilerplate language and framework and programming environment?
Haskell mostly solves boilerplate in a typed way and Lisp mostly solves it in an untyped way (I know, I know, roughly speaking).
To put it bluntly, there's an intellectual difficulty barrier associated with understanding problems well enough to systematize away boilerplate and use these languages effectively.
The difficulty gap between writing a ton of boilerplate in Java and completely eliminating that boilerplate in Haskell is roughly analogous to the difficulty gap between bolting on the wheels at a car factory and programming a robot to bolt on the wheels for you. (The GHC compiler devs might be the robot manufacturers in this analogy.) The latter is obviously harder, and despite the labor savings, sometimes the economics of hiring a guy to sit there bolting on wheels still works out.
It's very minimal-boilerplate. It's done an exceptional job of eliminating procedural, tedious work, and it's done it in a way that doesn't even require macros! "Template Haskell" is Haskell's macro system and it's rarely used anymore.
These days, people mostly use things like GHC.Generics (generic programming for stuff like serialization that typically ends up being free performance-wise), newtypes and DerivingVia, the powerful and very generalized type system, and so on.
If you've ever run into a problem and thought "this seems tedious and repetitive", the probability that you could straightforwardly fix that is probably higher in Haskell than in any other language except maybe a Lisp.
I find of all languages, Haskell often allows me to get by with the least boilerplate. Packages like lenses/optics (and yes, scrap your boilerplate/Generics) help. Funny package, though!
>Why haven't we collectively emphasized the creation of new tools to reduce boilerplate and grunt work?
Lisp completely eliminates boilerplate and has been around for decades, but hardly anyone uses it because programs that use macros to eliminate boilerplate aren't easy to read.
It used to be. When I learned to program for Windows, I would basically learn Delphi or Visual Basic at the time, maybe some database like Paradox. But I was reading a website that lists the skills needed to write a backend, and it was like 30 different things to learn.
That's exactly what I had in mind when I wrote the original comment. I learned Visual Basic as a kid faffing around on a computer, and there was so little boilerplate to make an app. It's been a regression since then.
We have the component architecture pattern to reduce the amount of HTML we have to write. If you're duplicating HTML elements on every page, that's mostly on you. There's a reason every template language has an include statement. That's a problem that's been solved for ages.
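As a toy illustration of the include idea, here is a hand-rolled expander in Python; the `{% include ... %}` directive is modeled on Jinja-style syntax, but the engine and template names here are invented for the sketch, not any real library.

```python
import re

# Templates keyed by name; real engines would load these from disk.
templates = {
    "header.html": "<header>My Site</header>",
    "page.html": "{% include header.html %}\n<main>content</main>",
}

def render(name: str) -> str:
    """Recursively expand include directives in the named template."""
    def expand(match: re.Match) -> str:
        return render(match.group(1))
    return re.sub(r"{%\s*include\s+(\S+)\s*%}", expand, templates[name])

assert render("page.html") == "<header>My Site</header>\n<main>content</main>"
```

Twenty lines of glue is all the mechanism takes, which is why include statements have been table stakes in template languages for decades.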
There are a million different environments. This includes OSes, languages, frameworks, and setups within those frameworks: Spring, Java or Kotlin, REST or gRPC, MySQL or Postgres, OkHttp or Ktor, etc.
There is no software you could possibly write that works for everything and that'd be as good as "Give me an internal dashboard with these features".
> Why haven't we collectively emphasized the creation of new tools to reduce boilerplate and grunt work?
I think it has. How much easier is it today than yester-decade to write and deploy an application to multiple platforms (and have it look/run similarly)?
I think this is one way of looking at what your parent was describing.
They weren’t just saying ‘AI writes the boilerplate for me.’ They were saying: once you’ve written the same glue the 3rd, 4th, 5th time, you can start folding that pattern into your own custom dev tooling.
AI not as a boilerplate writer but as an assistant to build out personal scaffolding toolset quickly and organically. Or maybe you think that should be more systemized and less personal?
> Why haven't we collectively emphasized the creation of new tools to reduce boilerplate and grunt work?
You don't understand how things evolve.
There have been plenty of platforms that got rid of boilerplate - e.g. Ruby on Rails, about 20 years ago.
But once they become mainstream, people can get a competitive edge by re-adding loads of complexity and boilerplate again - e.g. complex front-end frameworks like React.
If you want your startup to look good you've got to use the latest trendy front end thingummy.
Also, to be fair, it's not just fashion. Features that would have been advanced 20 years ago become taken for granted as time goes on, hence we are always working at the current limit of complexity (and that's why we're always overrun with bugs and always coming up with new platforms to solve all the problems and get rid of all the boilerplate so that we can invent new boilerplate).
Because of the obsession with backwards compatibility and not breaking code. The web development industry is the prime example. HTML, JavaScript, CSS, a backend/frontend architecture - an absolutely terrible stack.
I don't even know why things like templating and inclusion are not just part of the core web stack (ideally declaratively with no JS). There should be no need for an external tool or build process or third-party framework.
HTML is a rendered document. It's OK to write it by hand if you only need one document, but it's better to use an actual template language or some generator if you're going to have the same layout and components across many pages.
You’re asking to shift this job from the editor (you) to the viewer (the browser).
Maybe it was a "viewer" in the 90s. The viewer is no longer a viewer - it is a full-fledged application runtime that has a developer environment and a media stack, along with several miscellaneous runtimes. A standard template language and a document-inclusion feature are very small peanuts compared to that. A teeny house compared to the galaxy already built in - with several planets' worth of features being added yearly.
You both make good points, and I come down on the side of adding some template mechanism to web standards. Of course, that all starts with an RFC and a reference implementation. Any volunteers?
Would raise my hand to volunteer for the reference implementation. I guess it would need to be in C++/Rust? The RFC, however, involves way too much talking and also needs solid networking amongst the web crowd. Not qualified for that. For a template language, it would be better to copy a subset from an existing de-facto standard like jinja2, which already has a lean, performant subset implementation at https://github.com/Keats/tera.
The document/template inclusion model should be OK now in the modern era thanks to HTTP/3. Not really sure what that should ideally look like, though.
Because the set of problems we ask code to solve is huge and the world is messy. Many of these things really are at a very high level of abstraction, and the boilerplate feels boilerplatey but is actually slightly different in ways that aren't automatable. Or it is automatable, but the configuration for that automation becomes the new bit you look at and see as grunt work.