
Related to that, for a consumer electronics product I worked on using an ARM Cortex-M4 series microcontroller, I actually ended up writing a custom pseudorandom number generation routine (well, modifying one off the shelf). I was able to take the magic mixing constants and change them to things that could be loaded as single immediates using the crazy Thumb-2 immediate instructions. It passed every randomness test I could throw at it.

By not having to pull anything from the constant pool, and thereby avoiding memory stalls in the fast path, we got to use random numbers profligately and still run quickly and efficiently, and get to sleep quickly and efficiently. It was a fun little piece of engineering. I'm not sure how much it mattered, but I enjoyed writing it. (I think I did most of it after hours either way.)
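The flavor of the trick, reconstructed from memory rather than the actual shipped code (the constants below are illustrative only, chosen to show the encoding, and have not been vetted for statistical quality the way the real ones were):

    /* Sketch only: xorshift32 with an output multiply whose constant fits
     * Thumb-2's modified-immediate encoding (the 0xXYXYXYXY replication
     * pattern), so it materializes as a single MOV instead of a
     * literal-pool LDR.  The shift amounts are instruction immediates
     * already, so they never touch memory either. */
    #include <stdint.h>

    static uint32_t state = 0x2b2b2b2bu;   /* nonzero seed, also encodable */

    uint32_t prng32(void)
    {
        uint32_t x = state;
        x ^= x << 13;
        x ^= x >> 17;
        x ^= x << 5;
        state = x;
        return x * 0x93939393u;            /* odd, so the multiply is invertible */
    }

The real version had a better-chosen mixer, but the point is the same: every constant the hot path touches can come from the instruction stream.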

Alas, I don't think it ever shipped because we eventually moved to an even smaller and cheaper Cortex-M0 processor which lacked those instructions. Also my successor on that project threw most of it out and rewrote it, for reasons both good and bad.


Ah, Restoration Hardware. I've always thought the jwz quote applies perfectly to them: if I have a home furnishing problem, and I go to Restoration Hardware, now I have two problems. It's not enough to have their stores full of stuff I don't want; they've got to turn them into labyrinths too.

The Mob Museum was great though.


It doesn't look like it has a CAS (so it's not for mathematicians), and the scientific notation key isn't prominent (so it's not for scientists or engineers), so... who is it for? Part of the thing with the older TI calculators is that they were good for professionals, too, not just students. (My TI-89 is still in intermittently-very-heavy use 30 years later!)

Cute, but dreadfully silly.

The giveaway is the handling of uncertainty. That's too many decimal places for some of these measurements: 10 µm (0.01 mm) is not reliably measurable with a cheap caliper, and even with a good caliper or micrometer, you'll find that everyday objects simply can't be measured to that level of precision in any straightforward, repeatable way. (You need cleaning procedures, standardized handling, standardized sampling, etc.) And quoting "4.1g (5.1% too heavy)" versus "4.0g (2.6% too heavy)" is just absurd: that last digit really doesn't mean much. So don't treat it like it does.

For example, on the first random d6 at hand, I get 4.47g from my nice scale, and somewhere between 14.82 and 14.85 mm on the first face dimension, depending on how I measure, from my Mitutoyo caliper. I have a micrometer in the shop, but you can see that it'd be pointless to go get it. The next two faces are (14.79 to 14.84) and (14.76 to 14.87), so it's consistently like this.
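To put a number on that: 14.76 to 14.87 mm is a spread of 0.11 mm, so the honest statement is something like 14.8 ± 0.06 mm. Quoting that dimension to 0.01 mm claims roughly five times better precision than the repeatability of the measurement supports.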

Likewise, χ² to five decimal places isn't terribly useful... especially since you haven't really described the test you're running...
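For reference, the standard test here would be Pearson's χ² against a uniform distribution over the six faces. A minimal sketch, with made-up face counts:

    /* Pearson chi-squared uniformity test for a d6.  The counts are
     * invented for illustration.  Compare the statistic against the
     * critical value for 5 degrees of freedom (11.07 at alpha = 0.05). */
    #include <stdio.h>

    int main(void)
    {
        double counts[6] = {98, 105, 94, 101, 110, 92};  /* hypothetical rolls */
        double total = 0, chi2 = 0;
        for (int i = 0; i < 6; i++) total += counts[i];
        double expected = total / 6.0;
        for (int i = 0; i < 6; i++) {
            double d = counts[i] - expected;
            chi2 += d * d / expected;
        }
        printf("chi^2 = %.2f (critical value 11.07, df = 5)\n", chi2);
        return 0;
    }

Reporting that statistic much past a couple of significant figures doesn't add information; the sampling noise in the counts dominates long before the fifth decimal place.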

In general there's a lot of "look at me make measurements" here that might look impressive. There is very little "what is the true value of this measurement, and how well can we assert that", and simply not enough "is this the right thing to be measuring, and how much does that factor matter". That last one is critical: the actual weight of a die is, I think, not important at all. It's the weight distribution that matters, so who cares about 0.1g of difference? Unless you're making a batch uniformity claim? But really this evidence just says more about your measuring equipment. And it's well known that different color resins, especially black, white, and red, are loaded quite differently with pigments, so they have different properties. You can't just expect them to be the same, but the author seems surprised that they aren't.

And then we get to "These dice are safe to use" without any real description of the criteria or threshold. I say "this report is not safe to use (for serious purposes)"!

It's cute, and it's a fun little minute of reading on the internet this morning. But it's silly, and if my students back in the day or coworkers today sent it to me, they'd be getting red ink and remedial lectures in measurement uncertainty.


You can really only use AI for: things that are easy to verify; things that you already know how to do but want done faster; things you're learning to do and are just one step out of your reach (so it's still comprehensible to you); or, things that just plain don't matter.

That's a lot of stuff, but it also doesn't include a lot of the stuff people claim AI can do.


I was a smartwatch skeptic for a long time but finally traded my Timex for a Garmin a little over a year ago. I paid way too much money for it, but was able to get one that matched the Timex as closely as reasonably possible: about the same size and weight, with the MIP display that's always on. It's one of the smaller ("S") models so the battery is fairly crap (physically smaller means a way smaller battery), but unless I'm extremely active it does just fine by charging while I'm in the shower. Without charging I get about a week of normal activity (including GPS on for actual activities), maybe a bit less? It does much better if I'm not tracking anything, but what's the point of that?

It's made a huge difference in my life, for the positive, and that's really surprised me. I kind of expected to hate it. It only notifies me when my phone vibrates, and I've got my phone set to be particular about notifications, so that doesn't happen often. But it does mean I miss notifications and messages way less often. I used to never notice vibration alerts if I was out walking and my phone was in my pocket. Now I can respond to people moderately quickly!

The sleep tracking is kind of worthless, but it's nice to have stats. It's mostly useful for noticing longer-term trends, or when something goes horribly wrong (as it did last night for me): the data is just there, already collected, and you have something to look at.

It tells the time accurately: no more mentally compensating for the drift of the watch (admittedly, my last Timex, despite being great in all other respects, was the driftiest quartz watch I've ever owned).

But the fitness tracking... the fitness tracking has genuinely been effective in actually getting me to go out and do things. I really love seeing maps of where I've been when I take a city walk, or getting run stats as I slowly level up as a runner. I don't take it particularly seriously and I think that's just about right.

I really expected to hate this thing, but instead I love it. Maybe that's because I treat it as a dumbwatch plus fitness tracker and notification bell for my phone? The idea of having games, much less a web browser, on it really does sound ghastly.


Yeah, it's wrong.

Nasdaq, Inc. is a company with a stock market ("the NASDAQ") and an index (the "Nasdaq 100"). They want SpaceX to be listed on their market, because they like having more things on their market for all the usual reasons. They are, apparently, offering to manipulate their index to win the listing.

Accordingly, anything that uses or tracks this particular index (Nasdaq 100), such as the QQQ fund, will potentially have to pay for this manipulation.

Anybody not holding or indexing to the Nasdaq 100 index contents will not particularly care and will not really gain or lose any more money than on an ordinary trading day. In particular, this will have zero effect on stocks that merely trade on the NASDAQ exchange.

Indexing to the Nasdaq 100 is pretty uncommon, outside of QQQ, so most people will not care.


What?! This absolutely affects more than Nasdaq 100 / QQQ.

The index is just a function of the stocks. It only moves if the underlying stocks move. Rebalancing the Nasdaq 100 will cause selling in the 100 companies that aren't SpaceX. And those stocks are held elsewhere too…

The Nasdaq 100 shares 79 of its 100 stocks with the S&P 500. So if those stocks move (probably down, because they're being sold so SpaceX can get bought), that's going to affect anyone exposed to those companies, whether directly or through other index ETFs, many of which have a huge concentration in the Mag7 right now.
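To make the mechanics concrete with invented numbers: if SpaceX enters the index at, say, a 3% weight, every fund tracking it has to sell roughly 3% of each existing holding to fund the buy. You don't have to hold QQQ to feel that; the price impact of the selling shows up in your S&P 500 exposure to those same 79 names too.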


What you're saying is 100% correct; I fail to see how people are not aware of it.

We're talking about a $1.75 trillion (as per the article) company that is about to enter (a part of) the most important capital market in the world at a distorted price. Of course the market as a whole is going to become distorted: money and capital (and the accompanying money and capital signals) are among the most "liquid" things in a modern economy, if not the most liquid. Once you start putting a wrong price tag on them, those accompanying signals will for sure start doing their thing. IMO that was one of the main lessons we should have taken from what happened back in 2008-2009.


Sorry, a lot of the comments around this have been really badly written and it's been hard to tell what they're actually arguing.

I countered a different argument (which does appear elsewhere in this thread). You are absolutely right that there will be general price distortion from this mess. I disagree that it will be extremely bad, but I do agree that it's a problem and needs attention. It's just been difficult to tell that this is what some comments have meant to discuss, instead of the more basic issues others have been talking about.


Ah, I re-read my original comment with that in mind, and I see how it can go a few directions depending on the context - thanks!

Unfortunately, a gaming machine's workload is so read-heavy that I wouldn't expect Optane to square up well. Gaming is all about read speed and overall capacity. You need a heavy mixed I/O load, especially with low-latency deadlines, to see gains from Optane. That narrow target use case, coupled with ignorant benchmarking, always held the product back.

Around the time of Optane's discontinuation, the rumor mill was saying that the real reason it got the axe was that it couldn't be shrunk any, so its costs would never go down. Does anyone know if that's true? I never heard anything solid, but it made a lot of sense given what we know about Optane's fab process.

And if no shrink was possible, was that because a shrink was (a) possible but too hard; (b) ruled out by known blockers; or (c) something the execs didn't want to pay to find out?


I think it was killed primarily because the DIMM version had a terrible programming API. There was no way to pin a cache line, update it, and flush it, so no existing database buffer pool algorithms were compatible with it. Some academic work tried to address this, but I don't know of any products.

The SSD form factor wasn't any faster at writes than NAND with capacitor-backed power-loss protection. The read path was faster, but only in time to first byte; NAND had comparable or better throughput. I forget where the cutoff was, but I think it was less than 4-16KB, which are typical database read sizes.

So, the DIMMs were unprogrammable, and the SSDs had a “sometimes faster, but it depends” performance story.
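For anyone who never touched the DIMMs: making a store durable meant an explicit flush-and-fence sequence after every update, roughly like this (a sketch; production code generally went through libpmem's pmem_persist rather than raw intrinsics):

    /* Sketch of the App Direct store-persist dance (x86; _mm_clwb needs
     * the CLWB ISA extension).  Note what's missing: there is no way to
     * say "keep this line resident in cache", which is exactly what
     * buffer pool algorithms wanted. */
    #include <immintrin.h>
    #include <stdint.h>

    void persist_u64(uint64_t *p, uint64_t v)
    {
        *p = v;          /* store to the pmem-mapped address         */
        _mm_clwb(p);     /* write the cache line back toward media   */
        _mm_sfence();    /* order it ahead of any later stores       */
    }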


It sounds like they didn't do a good job of putting the DIMM version in the hands of folks who'd write the drivers just for fun.

The read path is sort of a wash, but writes are still unequalled. NAND writes feel like you're mailing a letter to the floating gate...


Isn't this addressed by newer PCIe standards? Of course, even the "new" Optane media reviewed in OP is stuck on PCIe 4.0...

No; the issue with the DIMMs wasn't drivers. The issue was that the only people allowed to target the DIMMs directly were the Xeon hardware team.

There was a startup doing good work with similar storage chips that were pin (BGA) compatible with standard memory. Not sure what happened to them. That'd be easier to program than XPoint.

As for the new PCIe standard (you probably mean CXL), that’s also basically dead on arrival. The CPU is the power and money bottleneck for the applications it targets, so they provide a synchronous hardware API that stalls the processor pipeline when accessing high-latency devices.

Contrast this to NVMe, which can be set up to either never block the CPU or amortize multiple I/Os per cache miss.

Companies like NVIDIA are already able to maintain massive I/O concurrency over PCIe without CXL, because they have a programming model (the GPU) that supports it. CXL might be a small win for that.
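To make the contrast concrete from userspace, here's roughly what the asynchronous model looks like with io_uring (a sketch; the device path is a placeholder and error handling is elided — this is the shape of the model, not what a database would ship):

    /* Queue a read, keep running, harvest the completion later.  The
     * submission never stalls the pipeline the way a synchronous load
     * to a slow CXL-attached device would. */
    #include <liburing.h>
    #include <fcntl.h>
    #include <stdlib.h>

    int main(void)
    {
        struct io_uring ring;
        io_uring_queue_init(32, &ring, 0);           /* 32-entry queue     */

        int fd = open("/dev/nvme0n1", O_RDONLY);     /* placeholder device */
        char *buf = aligned_alloc(4096, 4096);

        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, 4096, 0);   /* queue the read...  */
        io_uring_submit(&ring);                      /* ...and move on     */

        /* do other work here, then harvest the completion */
        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);
        io_uring_cqe_seen(&ring, cqe);

        io_uring_queue_exit(&ring);
        return 0;
    }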


Interesting perspective re CXL synchronous API. Wouldn't things like OOO execution and speculation help with that? And anyway the latency is supposed to be comparable to NUMA latency, is that really such a deal breaker?

The DIMMs were their own shitshow and I don't know how they even made it as far as they did.

The SSDs were never going to be dominant at straight read or write workloads, but they were absolutely king of the hill at mixed workloads because, as you note, time to first byte was so low that they switched between read and write faster than anything short of DRAM. This was really, really useful for a lot of workloads, but benchmarkers rarely bothered to look at this corner... despite it being, say, the exact workload of an OS boot drive.

For years there was nothing that could touch them in that corner (OS drive, swap drive, etc) and to this day it's unclear if the best modern drives still can or can't compete.
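If anyone wants to poke at that corner themselves, a fio job along these lines (device path and numbers are just a starting point) exposes exactly the behavior that pure-read and pure-write benchmarks hide:

    ; 50/50 random read/write at queue depth 1: the mixed, latency-bound
    ; corner where Optane shone.  Adjust filename to your device.
    [mixed-qd1]
    filename=/dev/nvme0n1
    ioengine=libaio
    direct=1
    rw=randrw
    rwmixread=50
    bs=4k
    iodepth=1
    runtime=60
    time_based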


That's at least physically half-plausible, but it would be a terrible reason if true. 3.5 in. hard drives can't be shrunk any, and their costs are correspondingly high, but they still sell; newer versions of NVMe even provide support for them. Same for LTO tape cartridges. Perhaps they expected other persistent-memory technologies to ultimately do better, but we haven't really seen that.

Worth noting though that Optane is also power-hungry for writes compared to NAND. Even when it was current, people noticed this. It's a blocker for many otherwise-plausible use cases, especially re: modern large-scale AI where power is a key consideration.


> 3.5 in. format hard drives can't be shrunk any,

You're looking at the entirely wrong kind of shrinking. Hard drives are still (gradually) improving storage density: the physical size of a byte on a platter does go down over time.

Optane's memory cells had little or no room for shrinking, and Optane lacked 3D NAND's ability to add more layers with only a small cost increase.


Flash has the same shrink problem. And the solution for Optane was the same: go 3D.

I don't think the shrink problem is at all the same for the two technologies. There are some really weird materials and production steps in Optane that are simply not present when making Flash cells.

Durability drops quickly with shrinking flash, so we won't see much smaller cells; the growth has been MLC → TLC → QLC and stacking.

The actual strength of Optane was on mixed workloads. It's hard to write a flash cell (read-erase-write cycle, higher program voltage, settling time, et cetera). Optane didn't have any of that baggage.

This showed up as amazing numbers on a 50%-read, 50%-write mix. Which, guess what, a lot of real workloads have, but benchmarks don't often cover well. This is why it's a great OS boot drive: there's so much cruddy logging going on (writes) at the same time as reads to actually load the OS. So Optane was king there.


It's the best OS drive, especially the P5800X.
