foton1981's comments | Hacker News

To be honest I was very surprised to hear what a cache SRE was working on. It sounded like he had to build all the handling of hardware issues, rack awareness, and other basic datacenter stuff himself. Does that mean every specialized team also had to do it? Why would a cache engineer need to know about hardware failures at all? It's the datacenter team's responsibility to detect and predict issues and shut down servers gracefully if possible. It should be completely abstracted away from the cache SRE, the way the cloud abstracts it from you. Yet he and his team spent years on automation around this stuff using the Mesos stack, which they probably regret adopting by now. I feel like in this zoomed-in case of Twitter's caches, what they were working on is questionable, but the team size seems adequate to the task. So my takeaway is that, like any older, larger company, Twitter accumulated a fair amount of tech debt and there is no one to take a large-scale initiative to eliminate it.


Seriously, who cares about a 5ms difference? Or even 25ms. I don't. And Asia is just used to longer latencies. Edge compute became a commodity even before it was born.


The whole point of edge workers is to be able to provide very low latency all around the world... once you have that, it opens up whole new classes of workloads. Yes, 25ms doesn't matter for most current use cases... but that is partly because the use cases that required lower latency weren't possible before.


I’d love to be able to (affordably) spin up a low-latency Windows desktop for a few hours at a time, one that I could use for applications which don’t run well on my bottom-tier M1 Mac.


Kubernetes makes CPU pinning rather simple: you just need to meet the conditions for Guaranteed QoS. https://kubernetes.io/docs/tasks/administer-cluster/cpu-mana...

We run a lot of Erlang on k8s, and CPU pinning improves the performance of the Erlang schedulers tremendously.
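A minimal sketch of what that looks like: the kubelet has to run with the static CPU manager policy (--cpu-manager-policy=static), and the pod has to request a whole number of CPUs with requests equal to limits so it lands in the Guaranteed QoS class. The names and image tag below are illustrative, not from the parent comment:

```yaml
# Illustrative pod spec. The container only gets exclusive cores when the
# kubelet runs with --cpu-manager-policy=static AND requests == limits
# with an integer CPU count (Guaranteed QoS).
apiVersion: v1
kind: Pod
metadata:
  name: beam-pinned          # hypothetical name
spec:
  containers:
  - name: erlang-app
    image: erlang:26         # hypothetical image tag
    resources:
      requests:
        cpu: "4"             # integer CPUs, equal to limits
        memory: 2Gi
      limits:
        cpu: "4"
        memory: 2Gi
```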


Interesting. I would be curious to hear why pinning improves performance here. Is this something specific to the BEAM VM? Does it come at a cost to k8s scheduler flexibility?


I don't have experience with k8s, but with BEAM on a traditional system, if BEAM is using the bulk of your CPU, you'll tend to get better results if each of the (main) BEAM scheduler threads is pinned to one CPU thread. Then all of the BEAM scheduler balancing can work properly. If both the OS and BEAM are trying to balance things, you can end up with a lot of extra task movement or extra lock contention when a BEAM thread gets descheduled by the OS to run a different BEAM thread that wants the same lock.

On most of the systems I ran, we didn't tend to have much of anything running on BEAM's dirty schedulers or in other OS processes. If you have more of a mix of things, leaving things unpinned may work better.
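On the BEAM side, the binding described above is controlled by emulator flags; a sketch, assuming the node has 4 cores available to it:

```shell
# 4 schedulers (4 online), bound to cores with the default bind type.
# +S Schedulers:SchedulersOnline ; +sbt db = default_bind
erl +S 4:4 +sbt db
```

The same flags can go in an ERL_FLAGS environment variable or a vm.args file for a release.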


Is your setup open source? I'd love to know more about the upsides of Erlang/OTP on top of k8s. Do you use hot code reloads?


I am afraid this article will resurrect the shortage even if it ended recently: I never cared about pasta shapes, but now I want bucatini. With 52 Hacker News points, there goes the national stock of bucatini.


I'm not sure why this article is being published now (dated December 28, 2020) if the shortage was really in March. It doesn't seem like it's still an issue. Amazon has tons of bucatini for sale, many available in a day with Prime: https://www.amazon.com/s?k=bucatini


The curse-blessing of getting to the top of HN has jumped the digital-physical barrier.

Which is more likely:

Error 404: Bucatini not found.

MySQL Error: cannot get connection to deliver Bucatini.

/me shows self out


Bucatini has gone away


...or HTTP 420 / 429: slow down consumption requests for bucatini


We might repurpose "418 I'm a teapot", now with bucatini power.


303 See Other Pasta Shapes

307 Temporary Redirect of Bucatini Deliveries



People don't seem to take action based on these things. I was concerned about not being able to buy vitamin D because of so many articles positively correlating it with good COVID-19 outcomes... but had no trouble buying it recently.


You can begin with this recipe: [0] :)

[0] https://www.youtube.com/watch?v=8t6ddIzPy0k



If you have a wegmans in your area, they might have it. I found some last night actually :)


Wegmans is a no-go where I live, thanks to an old 'handshake agreement' between the Wegman and Golub families, in that there would be no overlap between Wegmans and Price Chopper territory. :(

I'll have to check Hannaford and Shoprite instead.


I really want some now, too. :/


Yeah, with the amount they talked about extra sauce absorption in the article, I now have a very strong desire to try it myself...


I found that comment kind of odd, since the real king of sauce absorption is spaghetti rigati - spaghetti with ridges.

Bucatini has good sauce absorption but imho its claim to fame is its thickness, mouthfeel, and rigidity, which the author hilariously and accurately refers to as "sentient". It has a mind of its own.

There are two similar noodle shapes, pici and Strozzapreti (literally "priest stranglers") but they are even harder to find.


It's like a bitcoin investment. There won't be any more IPv4 :)


I don't understand how such a highly contagious virus would suppress and contain itself if it started in CA months ago. As if Californians have no outbound travel. An impossible scenario.


I don’t think anyone is claiming that it suppressed and contained itself. They’re claiming it’s more widespread and milder. There’s a good graphic in the Economist article showing two different peaks that look the same early on.


Damn. If this becomes legal precedent will I be forced to read every page of books I buy?


And you'd damn well use both items of your Buy One, Get One Free offer. No sharing. No only using half of the second item.


Looks like Akamai error to me.



Here's a video that explains how the Google Surveys team uses perceptual visual diffs (dpxdt) in their deployments: https://www.youtube.com/watch?v=UMnZiTL0tUc


It is an option; in fact, there are a lot of open source options. The subscription model we offer is one way to get that capability without having to worry about the complications of hosting it yourself.


What if I disable ntpd a minute before the leap second is injected and restart it a minute later?


My guess is, the leap second will be inserted, just as scheduled. And you'll get the famous message in your kernel log (dmesg):

http://lxr.free-electrons.com/source/kernel/time/ntp.c?v=2.6...

    Clock: inserting leap second 23:59:60 UTC
If ntpd gets notified of an impending leap second via its peers (or the connected radio clock, GPS receiver, ...), it will set (struct timex*)->status |= STA_INS via the adjtimex syscall (which it also uses to steer/speed up/slow down the clock).

http://man7.org/linux/man-pages/man2/adjtimex.2.html

The stock Linux kernel works such that if ntpd has set the STA_INS flag via adjtimex some time before, the kernel will do the leap-second insertion at the end of the UTC day.

If you disable ntpd and it doesn't reset this flag (which I doubt it does, but you'd have to check), the kernel will insert the leap second on its own, even if ntpd is not running.

If you disable ntpd and the STA_INS flag is cleared, either by ntpd on termination (which I doubt) or by you via the adjtimex syscall, then the kernel will not insert the leap second. After UTC midnight, the clock will be one second off, and a restart of ntpd will slowly steer it back to the correct time.

For playing with all of this, there's an adjtimex tool that can display and even change the timex values:

    ➜  sbin  ./adjtimex -pV | sed 's/^/    /'
             mode: 0
           offset: 0
        frequency: -344608
         maxerror: 16000000
         esterror: 16000000
           status: 64
    time_constant: 2
        precision: 1
        tolerance: 32768000
             tick: 10000
         raw time:  1432018768s 308111us = 1432018768.308111
     return value = 5


Thanks, that's good info! It helped me find a good article from someone who devoted a couple of days to testing this method: http://syslog.me/2012/06/01/an-humble-attempt-to-work-around...

