A torch isn't just a bundle of sticks that provides light. It needs to be soaked in liquid fuel to work - like kerosene or wax - and it's a messy, smoky thing even then. It's completely unsuitable for use indoors, and it certainly isn't a cheaper alternative to candles.
Or perhaps you haven't. Many common resources are perfectly viable for making torches, including resin from a variety of common tree and plant types. If you want to get fancy, you can even make your own pitch. That's the reason I added 'lush'. And why in the world would you want to be indoors? It'd be vastly more pleasant at, e.g., your gazebo or wherever else - subject to climatic/weather extremes, of course.
The concept of spending vast amounts of time indoors for both recreation and work is an extremely new thing. Not only post-electric but largely post-internet.
In the year 2000, an M1 MacBook Air would have been the world's fastest supercomputer (or second fastest if you had the base model with the 7-core GPU).
Impressive, of course; but not quite that impressive.
Only true if all you're running is matmuls (a supercomputer has general-purpose CPUs, so it's more flexible than the M1's GPU) - also, those FLOPS are probably FP64 in supercomputer ratings but FP32 for the M1.
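A back-of-the-envelope sketch of why the precision caveat matters. The figures below are approximate public numbers, not measurements: the June 2000 TOP500 #1 (ASCI Red) sustained roughly 2.4 TFLOPS on LINPACK in FP64, while the M1's 8-core GPU peaks around 2.6 TFLOPS in FP32.

```python
# Approximate figures (assumptions, not benchmarks):
asci_red_fp64_tflops = 2.4   # ASCI Red, sustained LINPACK, double precision
m1_gpu_fp32_tflops = 2.6     # M1 8-core GPU, peak, single precision

# The headline numbers look comparable...
print(m1_gpu_fp32_tflops / asci_red_fp64_tflops)        # ~1.08x

# ...but they aren't the same precision. Crudely halving the M1 figure
# as a stand-in for FP64 throughput flips the comparison.
print((m1_gpu_fp32_tflops / 2) / asci_red_fp64_tflops)  # ~0.54x
```

The halving is only a rough proxy (the M1 GPU doesn't natively target FP64 at all), but it illustrates that "fastest supercomputer of 2000" depends heavily on which precision you count.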
As a smart man I knew used to say, supercomputers are about I/O not raw compute. Those have terabytes of RAM not 8GB.
Your question hits directly at the latency vs. throughput distinction. It depends on which you mean by "fast."
Throughput-wise, the supercomputer is competitive because it has a lot of local RAM spread across many independent nodes, which in aggregate is comparable to a modern laptop's RAM throughput (and still much faster than disk) - with the caveat that you can only leverage the supercomputer's bandwidth if your workload is embarrassingly parallel and runs on all nodes[1]. Latency-wise, the old RAM still beats NVMe by two or three orders of magnitude.
[1]: The supercomputer has another advantage: far more local SRAM cache in aggregate. If the workload is parallel and can benefit from cache locality, it blows away the modern microprocessor.
> i can't help but wonder why apple appears to be fully singular in their arm dominance
I have to imagine that a big part of it is the company can plan and act as a single unit. The teams building the CPU, the computers which house that CPU, and the operating system and software that'll run on those computers are all working together, and can plan new features which cut across those boundaries. Other ARM CPU/system manufacturers don't have that advantage.
The early aluminum MacBook systems used a hinged trackpad. The "click" was a physical button under the trackpad, and you couldn't click on the top of the trackpad (because the hinge was on that side).
The MacBook Neo is a return to physical clicking, but they're using some sort of new mechanism which allows clicking anywhere.
Mine as well (well, not in Texas). I have fond memories of building a bunch of simple adventure games with my classmates - it was incredibly easy to learn how to use the authoring tools.
That's definitely a pattern I've seen in some LLM output, especially when users let an LLM "run away" with an idea and write a lot of text without supervision. The drive to coin names for things feels almost characteristic of self-help or lifestyle advice writing.
This seems like a huge risk factor for users who are at risk for schizophrenia - if someone is using the LLM as an "AI companion", the model is likely to reinforce, or even suggest, illusory connections between events or experiences the user has described in their conversations.