Hacker News

For what it's worth, I'm stuck on the very first x = x + 1 thing.

Not sure if you want to call it a screwup or bad grammar or whatnot, but it was perhaps a huge mistake that the "equals" sign was used for something that feels like, but emphatically DOES NOT mean, "is equal to."

It's "put this into that". It's an action verb. Should have perhaps insisted on x <- x + 1 or maybe better x + 1 -> x



> It's an action verb.

The difference is that it is an instruction. Conventional mathematical notation, while declarative by default, switches into instruction mode just the same with the "let" keyword. The usage of the "=" operator then becomes equivalent: e.g. let x = 1.

But as the aforementioned x = x + 1 comes from notations that are never declarative, where every statement is an instruction, the "let" keyword is unnecessary and redundant. You already know you are looking at an instruction.

> Should have perhaps insisted on x <- x + 1 or maybe better x + 1 -> x

While that would obviously work, it strays further from the conventional notation. Which doesn't matter to the universe in any way, but the topic of discussion here is about trying to stay true to conventional notation, so...


Is that the topic?

I do think what I'm trying to say here is: sorry, but your "conventional notation" sucks because "what is actually happening" is so very different from how the thing is overwhelmingly used for most people.


> Is that the topic?

Yes. "=" doesn't mean anything in the void of space. Conventional mathematical notation is what established "=" as meaning "equal to", as referenced by the original comment. But the same notation also uses it for assignment when in instruction mode, so imperative languages that use x = x + 1 syntax are quite consistent with it.

> but your "conventional notation" sucks

Maybe, but it's all anyone really knows nowadays. It's what you are going to learn in math class in school. It's what you are going to find used in mathematical papers. It is how you are going to express mathematical concepts to your friends and colleagues. Worse is better, I suppose, but it is what has set the standard. It is the de facto language of math. For whatever shortcomings it does have, virtually everyone on earth recognizes it, which is very powerful.

> so very different from how the thing is overwhelmingly used for most people.

I'm not sure how to grok this. let x = 1 is something anyone who has taken high school math will have encountered. Assignment is perfectly in line with the understandings of most people.

Do you mean that expressing things entirely with imperative constructs is unfamiliar to those who grew up with a primarily declarative view of math? That might be fair, but I'm not sure x = x + 1 is a specific stumbling block in that case. One has to understand imperative logic in its entirety to use these languages anyway, at which point nobody is going to think that x = x + 1 is intended as declarative equality.


In one character

    x ← x + 1


I prefer pascal's way: x := x + 1.


I believe you can blame Ken Thompson for this. In DMR's paper about early C history, he says:

> Other fiddles in the transition from BCPL to B were introduced as a matter of taste, and some remain controversial, for example the decision to use the single character = for assignment instead of :=.

I think Ken did most of the early design of B and DMR came along later to help with C. Ken has a famously terse style, so I can definitely see it being on brand to shave off a character from `:=`, which is what BCPL uses.

It's sort of a tricky little syntax problem. It makes perfect sense to use `=` for declarations:

    int x = 1;
At that point, you really are defining a thing, and `=` is the natural syntax for a definition. (It's what BCPL uses for defining most named things.)

You also need a syntax for mutating assignment. You can define a separate operator for that (like `:=`). Then there is the question of equality. From math, `=` is the natural notation because the operator there is often overloaded as an equality predicate.

Now you're in a funny spot. Declaring a variable and assigning to it later are semantically very similar operations. Both ultimately calculate a value and store it in memory. But they have different syntax. Meanwhile, defining a variable and testing two expressions for equality use the same syntax but have utterly unrelated semantics.

Given that math notation has grown chaotically over hundreds of years, it's just really hard to build an elegant notation that is both familiar to people and consistent and coherent.


As you say, there was a familiar declarative notation. Was there a familiar imperative notation?

    x ← x + 1
Not familiar, perhaps understandable.


That’s funny, because to me it was always immediately obvious that once that line runs, it must be true: once that line runs, x is now equal to whatever it was before, plus 1. It’s the precise opposite of a lie; it’s the literal definition of truth, in the sense that what is true is set by that line.

Programming is telling things to the computer, and in this case, you’re telling it what x is. Whatever you tell a computer is all the computer knows, whatever a computer does can only be what it’s told. If you never told it what x was, then x wouldn’t be anything… that’s the truth.


> once that line runs

This is the key point. Some people have a mental model that algebraic syntax is describing a set of immutable properties. You are defining things and giving them names, but not changing anything. There is no notion of time or sequence, because the universe is unchanging anyway. All you're doing is elucidating it.

Then there is a model where you are molding the state of a computer one imperative modification at a time. Sequence is essential because everything is a delta based on the state of the world before.

You have the latter model, which is indeed how the hardware behaves. But people with a mathematical mindset often have the former and find the latter very unintuitive.


It's hard for me to not suggest though that, essentially -- the math people are "right." Which is to say, "=" meant what it meant for 400 years, and then computers come along and redefine it to be an action verb and not an identity. And I think it's fair to consider that a mistake.


Doubling down on my downvotes, then:

Look, I teach IT (to both novices and interested folks at a college) for a living; count this as one of those ideas where, when I present it this simply, many a lightbulb goes on for those new to programming. Much as with zero-indexed arrays, I do very well with "look, this is stupid, but here's why it got that way, we'll just deal with it."

And while on occasion I take pride in nerd-dom, I feel like -- especially with the advent of AI for coding -- this is dinosaur stuff, where we would do better to go along with what's WAY more intuitive than to try to stick with mathematical or programming purity, etc.



