Hacker News: runeks's comments

> I think most people are just going to fly out of the nearest/most convenient airport and hope for the best.

There are many cities in the world with more than one airport within relatively close distance of each other. Just to name a few I've been to recently: New York City, London, Paris, Dubai.

I think it's useful information if it turns out one of these choices has significantly higher cancellation rates.



Wouldn't the compiler take care of producing the correct machine code?


The issue is that the C memory model allows more behaviours than the memory model of x86-64 processors. You can thus write code which is incorrect according to the C language specification but will happen to work on x86-64 processors. Moving to arm64 (with its weaker memory model than x86-64) will then reveal the latent bug in your program.


And “happen to work on x86-64 processors” also will depend on the compiler. If you write

  *a = 1;
  *b = 'p';
both the compiler and the CPU can freely pick the order in which those two happen (or even execute them in parallel, or do half of one first, then the other, then the remaining half of the first, but I think those are hypothetical cases).

x86-64 will never do such a swap, but x86-64 compilers might.

If you write

  *a = 1;
  *b = 2;
things might be different for the C compiler, because a and b can alias. Hardware with a memory model weaker than x86-64's is still free to change that order, though.


This architecture trick was often used for precisely this - finding bugs in the program that would work in one architecture and fail in another. A very common class of issues like these was about endianness, and PowerPC was very handy because it could boot in both big- and little-endian modes (I think I remember different versions of Linux for each mode, but I'm no longer sure).


Starting with POWER8, the Linux kernel and some of the BSDs support 64-bit PowerPC in both big- and little-endian modes. Older PowerPC chips had more limited support for little-endian, and all the commercial desktop/server PowerPC OSes that come immediately to mind (classic Mac OS, Mac OS X, NEXTSTEP / OpenStep, OS/400 / IBM i, AIX, BeOS) are big-endian only.

As you'd expect, Linux distribution support for big- and little-endian varies.


OpenBSD famously keeps a lot of esoteric platforms around, because running the same code on multiple architectures reveals a lot of bugs. At least that was one of the arguments previously.


Which is why Windows NT was multiplatform in 1993.

Developed on Intel i860, then MIPS, and only then on x86, alongside Alpha.


Big endian MIPS, no less! At least initially.


I don't think the i860 port lasted very long. IIRC, the performance in context switches was atrocious.


It didn't make it to release, and I believe MS switched internal development more or less as soon as it had an alternative. I do not recall reading any specific detailed look into why, so you may well be right!

Intel let the i860 and indeed the slightly older i960 wither, which was a damned shame. Ditto the Alpha it ended up with, while it sold its Arm licence to Marvell, which at one point recently was worth more than Intel.

I'd like to see a skunkworks project to resurrect one, or several, of these. :-) Attack Arm from a direction it's not expecting. And RISC-V come to that.


What is "correct"? If you write code that stores two values and the compiler emits two stores, that's correct. If the programmer has judged that the order of those stores is important, the compiler may not have any obligation to agree with the programmer. And even if it does, the compiler is likely only obligated to ensure the ordering as seen by the current thread, so two plain store instructions in the proper order would be enough to be "correct." But if the programmer is relying on those stores being seen in a particular order by other threads, then there's a problem.

Compilers can only be relied on to emit code that's correct in terms of the language spec, not the programmer's intent.


The compiler relies on the language and the programmer to enforce and follow a memory consistency model.


> Taalas’ silicon Llama achieves 17K tokens/sec per user, nearly 10X faster than the current state of the art, while costing 20X less to build, and consuming 10X less power.

Am I reading this right: 10x faster at 10x less power, i.e. 10 × 10 = 100x more energy-efficient per token?


> It's 2.5kW so it likely won't sit in your computer (quite beyond what a desktop could provide in power alone to a single card, let alone cool). It's 8.5cm^2 which is a beast of a single die.

I wonder how you cool a 3x3cm die that outputs 2.5 kW of heat. In the article they mention that the traditional setup requires water cooling, but surely this does as well, right?


Can't imagine what else could manage that: 2,500 W over a roughly 30 mm × 30 mm die is nearly 2.8 W/mm².

It does make you wonder: if the copy is misleading about something so simple, how much else could be puffery?

Maybe they mean that a standard liquid cooling system will work?


> Companies don't need "more work" half the "features"/"products" that companies produce is already just extra.

At my company we have a huge backlog where only the tip of that iceberg is pulled off every iteration to keep customers happy.

If they fired 90% of the engineers assuming a 10x increase in productivity, they might be able to offer their product at half the price. But if they keep all their engineers they'd get 10x the features and could probably charge twice as much for it.


> Sounds like you were just reviewing bad code.

Software engineering in a nutshell


You should lobby for this


I find APL very difficult to read. Incidentally, I am told (by Stack Overflow) that the APL expression "A B C" can have at least four different meanings depending on context[1]. I suspect there's a connection here.

[1] https://stackoverflow.com/a/75694187


Yes. It's either (1) an array, if A, B and C are arrays; (2) a function derived via the dyadic operator B, with operands A and C being either arrays or functions; (3) a dyadic call of the dyadic function B, where A and C are arrays; (4) the sequential monadic application of functions A and B to the array C; or (5) a derived function, the tacit fork, where A, B and C are all functions. Did I miss anything?


Yes, it can also be a fork where A is an array while B and C are functions; a tacit atop where either B is a monadic operator and A its array or function operand, or A is a function and C is a monadic operator with B as its array or function operand; and finally a single derived function where B and C are monadic operators while A is B's array or function operand.


Do APL programmers think this is a good thing? It sounds a lot like how I feel about currying in languages that have it (meaning it's terrible, because code can't be reasoned about locally, only with a ton of surrounding context, the entire program in the worst case).


It gets me thinking about the “high context / low context” distinction in natural languages. High context languages are ones where the meaning of a symbol depends on the context in which it’s embedded.

It’s a continuum, so English is typically considered low context but it does have some examples. “Free as in freedom versus free as in beer,” is one that immediately comes to mind.

A high context language would be one like Chinese where, for example, the character 过 can be a grammatical marker for experiential aspect, a preposition equivalent to “over” “across” or “through” depending on context, a verb with more English equivalents than I care to try and enumerate, an affix similar to “super-“, etc.

When I was first starting to learn Chinese it seemed like this would be hopelessly confusing. But it turns out that human brains are incredibly well adapted to this sort of disambiguation task. So now that I’ve got some time using the language behind me it’s so automatic that I’m not really even aware of it anymore, except to sit here racking my brain for examples like this for the purpose of relating an anecdote.

I would bet that it’s a similar story for APL: initially seems weird if you aren’t used to it, but not actually a problem in practice.


It makes parsing tricky. But for the programmer it’s rarely an issue, as typically definitions are physically close. Some variants like BQN avoid this ambiguity by imposing a naming scheme (function names upper case, array names lower case, or similar).


I am not good enough with APL to be certain, but I think you can generally avoid most of these sorts of ambiguities, and the terseness of APL helps a great deal because the required context is never far away; generally you don't even have to scroll. I have been following this thread to see what the more experienced have to say, and decided to force the issue.


Huh? Currying doesn't require any nonlocal reasoning. It's just the convention of preferring functions of type a -> (b -> c) to functions of type (a, b) -> c. (Most programming languages use the latter.)


Of course it requires non-local reasoning. You either get a function back or a value back depending on if you've passed all the arguments. With normal function calling in C-family languages you know that a function body is called when you do `foo(1, 2, 3)` or you get a compilation error or something. In a currying language you just get a new function back.


Functions are just a different kind of value. Needing to know the type of the values you're using when you use them isn't "nonlocal reasoning".

And it's not like curried function application involves type-driven parsing or anything. (f x y) is just parsed and compiled as two function calls ((f x) y), regardless of the type of anything involved, just as (x * y * z) is parsed as ((x * y) * z) in mainstream languages. (Except for C, because C actually does have type-driven parsing for the asterisk.)

Another way to look at it: languages like Haskell only have functions with one argument, and function application is just written "f x" instead of "f(x)". Everything follows from there. Not a huge difference.


It arguably depends on the syntax.

In an ML-like syntax where there aren’t any delimiters to surround function arguments, I agree it can get a little ambiguous because you need to know the full function signature to tell whether an application is partial.

But there are also languages like F# that tame this a bit with things like the forward application operator |> that, in my opinion, largely solve the readability problem.

And there are languages like Clojure that don’t curry functions by default and instead provide a partial application syntax that makes what’s happening a bit more obvious.


>> Did I miss anything?

Derived operators?

And 'A B C' as an array isn't valid (ISO) APL but an extension; the 'array syntax' only covers numbers, and the parser is supposed to treat it as a single token.

Your useless information of the day...


And they could be 0- or 1- indexed? :P


> The conclusion? AI is a world-changing technology, just like the railroads were, and it is going to soon explode in a huge bubble - just like the railroads did.

Why "soon"? All your arguments may be correct, but none of them imply when the pending implosion will happen.

