Hacker News | mattacular's comments

The Airbnb JS style guide is a relic of the transition to ES6. Leave it in the past. Its only argument against e.g. "for-of" is that "legacy" browsers don't support it. That may have been salient at the time of writing (these issue threads are nearly 10 years old), but it certainly isn't today.

It’s mostly the argument of a single guy (ljharb), who has very strong and weird opinions.

There were several extremely long and unproductive GitHub discussions with him posted to HN before. He sent unsolicited and unwanted PRs to lots of projects that tanked performance by orders of magnitude and introduced tens of dependencies (which he wrote himself), all to ensure that projects would be backwards compatible with completely irrelevant targets they never supported in the first place (like Internet Explorer 1.3 under Windows 95, or something along those lines).

When projects did not want to take these contributions, he played his TC39 authority card, and when that did not help, his ramblings became more and more incoherent.

My take is that this guy genuinely wants to do good, and thinks he is, while actually having lost the plot a while ago.


very interesting thread

i used to be like ljharb. changing company/stack made me live in python for loops, but now i miss composable operators ..


> Their only argument for not supporting eg. "for-of" is because of "legacy" browsers not supporting it.

Not really, their rationale is written in the document I linked:

    Why? This enforces our immutable rule. Dealing with pure functions that return values is easier to reason about than side effects.

    // bad
    let sum = 0;
    for (let num of numbers) {
      sum += num;
    }
    sum === 15;

    // good
    let sum = 0;
    numbers.forEach((num) => {
      sum += num;
    });
    sum === 15;
Which is... ridiculous. None of this is actually immutable, as "sum" is constantly being modified. A real FP purist would be using "reduce" in the "good" example. Otherwise, forEach is not better than for-of in terms of readability or maintainability in any way.
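For comparison, here is a sketch of the reduce version the comment alludes to. The sample array is an assumption; the guide's example only shows that the sum is 15.

```javascript
// Assumed sample data; the guide's snippet only reveals that the sum is 15.
const numbers = [1, 2, 3, 4, 5];

// No mutable binding anywhere: reduce threads the accumulator through the calls.
const sum = numbers.reduce((total, num) => total + num, 0);

console.log(sum === 15); // true
```

Unlike both of the guide's snippets, this version never reassigns a variable, which is what the "immutable rule" presumably wanted in the first place.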

In addition, I think part of the issue is that for a long time, when you used the ESLint CLI to create a new config file, airbnb was the default option, which made it very widely used even after the config itself went into maintenance mode. It was only removed in 2024: https://github.com/eslint/create-config/pull/108


In this specific example, both have side effects... if you really wanted to avoid them, you'd use .reduce, and even then, it depends on the amount of data. More often than not, if you're doing these kinds of things in JS on in-memory data, you're probably doing something wrong and have bigger concerns, imo.

I say this as someone who loves JS since before The Good Parts book.


Nothing can remove complexity other than simplifying requirements. It can only be shuffled around and distributed to other areas of the system (or a library, or vendor functionality, etc.).

I think this is true for essential complexity. And indeed it's one of the best reasons to release early and often, because usage helps clarify which parts of the requirements are truly required.

But plenty of projects add quite a lot of incidental complexity, especially with technology choices. E.g., Resume Driven Development encourages picking impressive or novel tools, when something much simpler would do.

Another big source of unneeded complexity is code for possibilities that never come to fruition, or that are essentially historical. Sometimes that's about requirements, but often it's about addressing engineer anxiety.


You absolutely can remove unnecessary complexity. If your app makes an http request for every result row in a search, you'll simplify by getting them all in one shot.
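A toy sketch of that N+1 pattern, counting round trips with a stand-in for the network call (all names here are made up for illustration):

```javascript
// Count how many "network" round trips each approach makes.
let requests = 0;

// Stand-in for a real fetch: one call = one round trip.
const fakeFetch = (ids) => {
  requests += 1;
  return ids.map((id) => ({ id }));
};

// bad: one round trip per result row
function fetchOneByOne(ids) {
  return ids.flatMap((id) => fakeFetch([id]));
}

// better: get them all in one shot
function fetchBatched(ids) {
  return fakeFetch(ids);
}

requests = 0;
fetchOneByOne([1, 2, 3]);
console.log(requests); // 3 round trips

requests = 0;
fetchBatched([1, 2, 3]);
console.log(requests); // 1 round trip
```

Same rows either way; the only thing removed is the unnecessary per-row latency and overhead.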

Learn what's happening a level or two lower, look carefully, and you'll find VAST unnecessary complexity in most modern software.


I'm not talking about unnecessary (nor incidental) complexity. That is a whole other can of worms. I am talking about the complexity required to build a system to spec. If choices are made that introduce unnecessary complexity (e.g. "resume driven development" or whatever you want to call the proclivity to chase new tech), that is a different problem. Sometimes it can be eliminated through practical considerations. Sometimes organizational politics and other entrenched forces prevent it.

If - to take a convenient example - I use a library sorting function instead of writing my own sorting code, it's true that I haven't removed the complexity of the work my program is doing: It sorts. But I have arguably reduced the complexity of my code.

Similarly, if I factor out some well-named function instead of repeating the same sequence of actions in multiple places, the work to be done is just as complex, and I haven't even removed the complexity from my code, but I have traded the complexity of N different pieces of code for 1 such piece plus N function calls. Granted, that tradeoff isn't always the right thing to do, but one could still claim that, often, it _does_ reduce the complexity of the code.
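A minimal sketch of that trade, using hypothetical validation logic as the repeated sequence:

```javascript
// Before: every handler repeated the same trim-check-throw sequence inline.
// After: 1 well-named function plus N one-line call sites.
function requireNonEmpty(value, label) {
  if (value.trim() === "") {
    throw new Error(`missing ${label}`);
  }
  return value;
}

// Each call site now names the intent instead of restating the mechanics.
const user = { name: "Ada", email: "ada@example.com" };
requireNonEmpty(user.name, "name");
requireNonEmpty(user.email, "email");
```

The validation is exactly as complex as before; what changed is that the complexity lives in one place with a name attached.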


"Explain why, not what" is good general advice. My further advice for comments: even bad comments can be useful (unless they're from LLM output, maybe...), so when in doubt, write a comment. Write it in your own words.

I had to add that last sentence for the circa-2020s developer experience. LLM comments are almost never useful, since comments are supposed to convey meaningful information to another human coder; anything your human brain can think of will probably be more helpful context.


I strongly disagree. If you're using something like Claude Code to generate the code, it has significant context about the task, which from my experience provides very useful (albeit overly verbose) comments. I sometimes edit/rewrite its comments (as I might with the code itself), but I would never ask it to generate uncommented code.

I always think LLM comments are more about helping themselves to stay on track.

Same goes for human comments tbf

AI comments are fine for high level summaries of the what/how. They fail at the why, which is where we come in.

> even bad comments can be useful

Bad comments aren’t just less helpful than possible, they’re often harmful.

Once you’ve hit a couple of misleading comments in a code base (i.e. not updated, flatly wrong, or deeply confusing due to misused terms), the calculus swings to actively ignoring those lying lies and reading the code directly. And the kind of mind that resorts to prose when struggling to clarify its programming frequently struggles with both maintenance and clarity of prose.


I hear this argument used as an excuse not to write comments (sometimes at all). Maybe I am just lucky, but I have never had the issue you've described in any codebase, and if I did, certainly not to the extent that it became a memorable thorn in my side.

If there are no comments, you are reading the code (or some relatively far-away document) for all understanding anyway. If there are inaccurate comments, worst case you're in the same boat, except maybe proceeding with a bit more caution at the next comment you come across. I always ask fellow engineers: why is it unduly difficult to also fix/change the comments as you alter the code they refer to? How and when to use comments is a balancing of trade-offs: potential maintenance burden if future writers are lazy, versus the opportunity to share critical context nearest its subject, in the place where the reader is most likely to encounter it.


The soap opera effect (caused by motion smoothing and similar settings) is the one that bugs me most. It's good for sports, where the ball is in motion, and that's it. It makes everything else look absolutely terrible, yet it is on by default on most modern TVs.

My Nest works great, other than the app trying to get me to change my account to Google, which I just close out of every time. Basic functional UI that works as billed. The unit itself has a nice sturdy feel with a very intuitive wheel-and-click interface. It's the only smarthome thing I have besides Hue lightbulbs.

Really nice, I found it highly intuitive on first use. Only thing I might suggest is making it more obvious what the "handle" button is that initiates the pick.


They can detect golang pretty reliably by fingerprinting the requests they handle (i.e. the TLS handshake), unless the app developer has taken explicit measures to counter it.


> The whole web ecosystem was first run by VC money and everything was great until every corner was taken,

Categorically untrue and weird revisionism. Basically the opposite of what actually happened.


I agree with the untrue and revisionism bit, but I disagree with it being the opposite of what happened.

People were trying to figure out how to make money off of the Internet from the early days of it being publicly accessible (rather than a tool of academic and military institutions). It can be attributed to the downfall of Gopher. It can be attributed to the rise of Netscape and Internet Explorer. While the early web was nowhere near as commercial as it is today, we quickly saw the development of search engines and (ad-supported) hosting services that were. By the time the 2000s hit, VC money was very much starting to drive the game. In the minds of most people, the Internet was only 5 to 10 years old at that point. (The actual Internet may be much older, but few people took notice of it until the mid-1990s.)


> People were trying to figure out how to make money off of the Internet from the early days of the Internet being publicly accessible

People were doing that even in ARPANET days. The commercial aspect was seen as a strong incentive to make ARPANET accessible by the masses.


> And the industry might finally be waking up to the fact that writing code is a small part of producing software.

Typing code and navigating syntax is the smallest part. We're a solid 30 years into industrialized software development. Better late than never?


Genuine question: Why would anyone want to read that? I glanced at the first sentence and decided not to go any further.

It is hollow text. It has none of the qualities I'd want to get out of even the worst book produced by a human mind.

Even more sophisticated models have a ceiling of pablum.


While hollow, it is also bad (and absurd) enough to be quite entertaining. It’s from an era where this wasn’t far off the state of the art for coming up with machine-generated text—context that makes it quite a bit funnier than if it were generated by an LLM today.

That said, it’s obviously not to everyone’s tastes!


It's purely meant to be an absurdist read. It obviously makes no sense, yet is close enough to actual language patterns for (some, at least) people to find it hilarious. I had tears in my eyes from laughing way too hard when I first read it.

