To be honest I was very surprised to hear what a cache SRE was working on. It sounded like he had to build all of the handling of hardware issues, rack awareness, and other basic datacenter concerns himself. Does that mean every specialized team also had to do it? Why would a cache engineer need to know about hardware failures at all? It's the datacenter team's responsibility to detect and predict issues and shut down servers gracefully if possible. It should be completely abstracted away from the cache SRE, the way a cloud abstracts it from you. Yet he and his team spent years on automation around this stuff using a Mesos stack that they probably regret adopting by now.
I feel like in this zoomed-in case of Twitter's caches, what they were working on is questionable, but the team size seems adequate to the task. So my takeaway is that, like any older, larger company, Twitter accumulated a fair amount of tech debt, and there is no one to take a large-scale initiative to eliminate it.
Seriously, who cares about a 5ms difference? Or even 25ms? I don't. And Asia is just used to longer latencies.
Edge compute became a commodity even before it was born.
The whole point of edge workers is to be able to provide very low latency all around the world... once you have that, it opens up whole new classes of workloads. Yes, 25ms doesn't matter for most current use cases.... but that is partly because the use cases that required lower latency weren't possible before.
I’d love to be able to (affordably) spin up a low-latency Windows desktop for a few hours at a time, one I could use for applications that don’t run well on my bottom-tier M1 Mac.
Interesting. I would be curious to hear why pinning improves performance here. Is this something specific to the BEAM VM? Does this come at a cost to K8s scheduler flexibility?
I don't have experience with k8s, but with BEAM on a traditional system, if BEAM is using the bulk of your CPU, you'll tend to get better results if each of the (main) BEAM scheduler threads is pinned to one CPU thread. Then all of the BEAM scheduler balancing can work properly. If both the OS and BEAM are trying to balance things, you can end up with a lot of extra task movement or extra lock contention when a BEAM thread gets descheduled by the OS to run a different BEAM thread that wants the same lock.
On most of the systems I ran, we didn't tend to have much of anything running on BEAM's dirty schedulers or in other OS processes. If you have more of a mix of things, leaving things unpinned may work better.
I am afraid this article will resurrect the shortage even if it has ended by now: I never cared about pasta shapes, but now I want bucatini. With 52 Hacker News points, there goes the national stock of bucatini.
I'm not sure why this article is being published now (dated December 28, 2020) if the shortage was really in March. It doesn't seem like it's still an issue. Amazon has tons of bucatini for sale, many available in a day with Prime: https://www.amazon.com/s?k=bucatini
People don't seem to take action based on these things. I was concerned about not being able to buy vitamin D because of so many articles positively correlating it with good COVID-19 outcomes... but had no trouble buying it recently.
Wegmans is a no-go where I live, thanks to an old 'handshake agreement' between the Wegman and Golub families that there would be no overlap between Wegmans and Price Chopper territory. :(
I'll have to check Hannaford and Shoprite instead.
I found that comment kind of odd, since the real king of sauce absorption is spaghetti rigati - spaghetti with ridges.
Bucatini has good sauce absorption but imho its claim to fame is its thickness, mouthfeel, and rigidity, which the author hilariously and accurately refers to as "sentient". It has a mind of its own.
There are two similar noodle shapes, pici and strozzapreti (literally "priest stranglers"), but they are even harder to find.
I don't understand how such a highly contagious virus would suppress and contain itself if it started in CA months ago. As if Californians have no outbound travel. It's an impossible scenario.
I don’t think anyone is claiming that it suppressed and contained itself. They’re claiming it’s more widespread and more mild. There’s a good graphic in the Economist article showing two different peaks that look the same early on.
It is an option, in fact there are a lot of open source options. The subscription model we offer is one way to gain the ability without having to worry about the complications of hosting it.
If ntpd gets notified of an impending leap second via its peers (or the connected radio clock, GPS receiver, ...), it will set (struct timex*)->status |= STA_INS via the adjtimex syscall (which it also uses to steer/speed up/slow down the clock).
The stock Linux kernel works such that, if ntpd has set the STA_INS flag via adjtimex some time before, the kernel will do the leap-second insertion at the end of the UTC day.
If you disable ntpd and it doesn't reset this flag (which I doubt it does, but you'd have to check), the kernel will insert the leap second on its own, even if ntpd is not running.
If you disable ntpd, and either ntpd on termination (which I doubt), or you via the adjtimex syscall, clears the STA_INS flag, then the kernel will not insert the leap second. After UTC midnight the clock will be one second off, and a restart of ntpd will slowly steer it back to the correct time.
For playing with all of this, there's an adjtimex tool which can display and even change the timex values: