Kepler’s laws should still provide a pretty good estimate, at least until the black holes get much closer. I did a quick back-of-the-envelope calculation, and it looks like they’ll be roughly 14k astronomical units, or 0.22 light years, apart.
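In case anyone wants to check the arithmetic, here's a rough Python sketch. The 14k AU figure is just my estimate from above, and the combined mass is a placeholder assumption to show Kepler's third law at work, not a number from the article:

    # Sanity check of the numbers above. The combined mass is a placeholder
    # assumption for illustration, not a figure from the article.
    import math

    AU_M = 1.496e11      # metres per astronomical unit
    LY_M = 9.461e15      # metres per light year
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30     # solar mass, kg

    a_au = 14_000        # separation estimate from above, in AU
    print(a_au * AU_M / LY_M)    # ~0.22 light years

    # Kepler's third law: T = 2*pi*sqrt(a^3 / (G * M_total))
    m_total = 2e8 * M_SUN        # hypothetical combined mass of the pair
    a_m = a_au * AU_M
    t_s = 2 * math.pi * math.sqrt(a_m**3 / (G * m_total))
    print(t_s / 3.156e7)         # orbital period, ~120 years for these assumptions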
1604. One could say we are overdue. I’m not sure about dust or other obstacles blocking the light, but based on brightness alone a supernova in our galaxy should be visible to the naked eye.
1604? Sort of; SN 1987A was visible to the naked eye at 3rd magnitude. It was in the Large Magellanic Cloud, which is almost in our galaxy but not quite. 170k light years.
https://en.wikipedia.org/wiki/SN_1987A
It would be. Which is why any pair of orbiting bodies will eventually collide.
It’s just that for black holes this effect is insignificant (a merger would take much longer than the age of the Universe) until they get close to each other, much closer than 1 parsec.
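To put a rough number on "much longer than the age of the Universe": for a circular orbit, the gravitational-wave inspiral time follows the Peters formula, t = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2)). Here's a quick sketch; the two 1e8 solar-mass black holes are my assumption for illustration, not something from this thread:

    # Gravitational-wave inspiral time for a circular binary (Peters formula):
    #   t = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2))
    # The masses are assumptions for illustration only.
    G = 6.674e-11       # m^3 kg^-1 s^-2
    C = 2.998e8         # speed of light, m/s
    M_SUN = 1.989e30    # kg
    PC_M = 3.086e16     # metres per parsec
    YEAR_S = 3.156e7    # seconds per year

    m1 = m2 = 1e8 * M_SUN   # hypothetical supermassive black holes
    a = 1.0 * PC_M          # starting separation: 1 parsec

    t = 5 * C**5 * a**4 / (256 * G**3 * m1 * m2 * (m1 + m2))
    print(t / YEAR_S)       # ~3e14 years, vs ~1.4e10 for the age of the Universe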
The potential to detect supermassive black hole mergers is one of the reasons I'm really excited about the LISA project [1], and I hope it actually gets funded and doesn't get delayed too much.
In quantum mechanics, bosons are the (often massless) force-carrying particles, like photons or gluons. Fermions are the massive matter particles, such as electrons or quarks.
So, while I’ve never heard this saying before, I assume its meaning is that massless particles like photons are best for carrying information around (rather than the electrons we use in circuits today), while electrons are best for carrying state, as in a switch.
Note that in networking we have already made that transition by using fiber optics rather than electrical wire to transfer information over longer distances.
I wonder what the longest information-carrying electrical wire is? They used to cross oceans, but not anymore. There are loads of DSL twisted pairs and coax cables in the "last mile" that maybe go 5 km max. In rural areas maybe up to 50 km with repeaters?
Is there something in between? Some old buried copper trunk cable between two university campuses or something like that?
If you have the ability to spin up a new machine when the old one fails, and deploy your app onto it in one minute, it’s not a big leap to also run your app on two machines and avoid that downtime altogether.
Running two instances of a stateful application in parallel forces you to consider nasty, hard problems such as the CAP theorem. If your requirements allow, it's much easier to run an active-standby architecture than active-active.
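To make the active-standby idea concrete, here's a toy single-machine sketch where an OS file lock stands in for whatever real fencing mechanism you'd actually use (a database advisory lock, an etcd/ZooKeeper lease, a managed failover); the path and timings are made up for illustration:

    # Toy active-standby sketch: a file lock stands in for real leader election.
    # Run two copies: one acquires the lock and becomes active, the other waits
    # as standby and takes over if the active process exits. Unix-only (fcntl).
    import fcntl
    import time

    LOCK_PATH = "/tmp/myapp.leader.lock"   # made-up path for illustration

    def serve_forever():
        # Placeholder for the actual application work.
        while True:
            print("active: handling requests")
            time.sleep(5)

    def main():
        lock_file = open(LOCK_PATH, "w")
        while True:
            try:
                # Non-blocking exclusive lock: only one process can hold it.
                fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
                print("acquired leadership, becoming active")
                serve_forever()
            except BlockingIOError:
                print("standby: active instance still holds the lock")
                time.sleep(2)   # poll interval, arbitrary

    if __name__ == "__main__":
        main()

The point is that failover becomes a promotion decision rather than a consistency problem, which is a much smaller surface than keeping two active instances in sync.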
Most applications as a whole are absolutely stateful. Individual components of them might not be (app servers are stateless with the DB/Redis containing all state), but the whole app from an external client's perspective is stateful.
If we're talking about reliability/outage recovery, we're considering the application as one single unit visible from the external client's perspective - so everything including the DB (or equivalent stateful component) must be redundant.
Sadly this is also where a lot of cloud-native tooling and best practices fall short. There are endless ways to run stateless workloads redundantly, but stateful/CAP-bound workloads seem to be ignored/handwaved away.
I've seen my fair share of stacks that do the right thing for the easy, stateless parts (redundancy, infinite horizontal scalability), while everyone kinda ignores the elephant in the room: the CAP-bound primary datastore that everything else depends on. It isn't horizontally scalable, and its failover/replication behavior is misunderstood and untested. Teams only get away with this because modern hardware is reliable enough that outage/failover windows are rare, so the unexpected or undefined behavior during those windows flies under the radar.
That’s a pretty pedantic interpretation of the word "application". In the context of software owned by most teams, the kind they may decide to run on a single host versus multiple hosts, most applications are absolutely stateless. Most applications outsource state to another system, like a relational database, a managed NoSQL store, or an object store.
And so no, most teams don’t need to worry about the hard problems you bring up.
Is it really an application if it’s not stateful? Maybe you’re managing the state client-side, which makes it easier, but I wouldn’t call a plain website an application. Or am I missing something?
At the smallest level, even every byte of an in-flight HTTP request is still state. State, and for that matter "uptime", really depend on what the application/service ultimately does and what the agreement/SLA with the end customer is.
The right high-availability solution should take business requirements into account; there is no silver bullet. Running everything on a $5 VPS isn't one, but neither is the typical "cloud-native" "best practice" stack that everyone keeps cargo-culting, which often adds unnecessary cost while leaving many hard questions (such as replicating CAP-bound stateful databases) unanswered.
I only know that from the Planet Earth documentary, which was such a great show!