Hacker News | jabl's comments

I know nothing about IT project management for healthcare, but just the other day the local news here mentioned that the all-singing, all-dancing healthcare application that our region (~1M inhabitants) has spent years and around 800 million euros trying to get into production has been so poorly received that they're considering starting over from scratch. I'm so happy seeing my tax money well spent...

This is an implementation of something called MUMPS, which is apparently an arcane but still widely used language/database from the US healthcare world.

Again, I'm not an expert on this topic, but it does seem like standards, APIs, file formats and whatnot would be key to a system where decoupled components can evolve step by step over time, instead of the current humongous monolith.


Which sounds crazy to me considering how much involvement the US has with FHIR.

http://hl7.org/fhir/

Even if you don't care about this stuff, FHIR is definitely worth investigating.


OoO is a surprisingly old idea, first used in the IBM System/360 Model 91 released all the way back in 1966.

https://en.wikipedia.org/wiki/Tomasulo's_algorithm

It took a while until transistor budgets allowed it to be implemented in consumer microprocessors, though.


Also for the gap between CPU speed and memory speed to matter enough for it to be worthwhile.

If you read more carefully, it says Wine's fsync needs some enhancements to the futex API, collectively called futex2. The original patch that fsync depended on called the syscall futex_wait_multiple. futex2 eventually made it into the mainline kernel, but the syscall ended up being named futex_waitv. Not sure if the Wine fsync implementation was updated to use the mainline futex2 implementation.

x86 has decades of know-how and a zillion transistors spent on making the memory pipeline, TLB caching, prefetching etc. really good. They work as well as they do despite the 4k base page size, not because of it.

If you started from a clean sheet today you'd probably end up with a somewhat bigger base page size. Not hugely larger though, as that wastes a lot of memory for most applications. Maybe 16k, like some ARM chips use?
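The memory-waste tradeoff can be put in rough numbers. A toy sketch (the file sizes below are made up for illustration, not measurements): every mapping whose size isn't a multiple of the page size wastes the slack in its final, partially filled page, so a larger base page size wastes proportionally more.

```rust
// Back-of-the-envelope estimate of internal fragmentation: each mapping
// wastes the unused tail of its last page. The file sizes are hypothetical.
fn wasted_bytes(file_sizes: &[u64], page: u64) -> u64 {
    file_sizes
        .iter()
        .map(|&s| {
            let rem = s % page;
            if rem == 0 { 0 } else { page - rem } // slack in the last page
        })
        .sum()
}

fn main() {
    let sizes = [1_300u64, 5_000, 40_000, 123_456, 2_000_000];
    for page in [4096u64, 16384, 65536] {
        println!("{:>6}-byte pages: {} bytes wasted", page, wasted_bytes(&sizes, page));
    }
}
```

For this made-up set of small files, going from 4k to 64k pages inflates the wasted space by more than an order of magnitude, which is why "not hugely larger" is the likely sweet spot.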


Not sure about this argument, do you have any references?

In a LWR, if the coolant/moderator boils away, sure, the reactivity goes down. But there is plenty enough decay heat left to melt all the fuel that can then flow into a puddle of suitable geometry and go boom. Hypothetically speaking, at least.

I suppose in practice most LWRs use lightly enriched fuel, so it's very hard to get enough material close enough together to make it critical, let alone supercritical, without a moderator of some sort. Of course, plenty of research reactors, naval reactors etc. have operated with very highly enriched fuel (90+%?), but even these have, AFAIU, so far managed without accidentally turning themselves into nuclear bombs.

Most contemporary civilian fast reactor designs seem to operate with HALEU fuel, where the limit is (somewhat arbitrarily) set at 20% enrichment. That's a lot higher than your typical LWR, but still much lower than what you see in weapons, and you still need quite a lot of it before it can go boom.


It's straightforward. Consider what would happen (for example) if all the fuel in a reactor is compressed into a more compact configuration.

In a thermal reactor, there's no problem, as there's now no moderator. There was massive rearrangement and compaction of melted fuel at the TMI accident, but criticality was not going to be a serious issue for the fundamental reasons I gave above.

In a fast reactor? It can only become more reactive. Anything else there was only absorbing neutrons, not helping, and the geometric change reduces neutron leakage.

Edward Teller somewhat famously warned about the issue in 1967, in a trade magazine named "Nuclear News":

“For the fast breeder to work in its steady state breeding condition, you probably need half a ton of plutonium. In order that it should work economically in a sufficiently big power producing unit, it probably needs more than one ton of plutonium. I do not like the hazard involved. I suggested that nuclear reactors are a blessing because they are clean. They are clean as long as they function as planned, but if they malfunction in a massive manner, which can happen in principle, they can release enough fission products to kill a tremendous number of people.

… But if you put together two tons of plutonium in a breeder, one tenth of one percent of this material could become critical. I have listened to hundreds of analyses of what course a nuclear accident could take. Although I believe it is possible to analyze the immediate consequences of an accident, I do not believe it is possible to analyze and foresee the secondary consequences. In an accident involving plutonium, a couple of tons of plutonium can melt. I don’t think anyone can foresee where one or two or five percent of this plutonium will find itself and how it will get mixed with other material. A small fraction of the original charge can become a great hazard.”

(Natrium is not a breeder but the same argument holds.)

That no fast reactors have yet exploded is of course no great argument. How many fast reactors have been built, particularly large ones? Not many. And we've already seen a commercial fast reactor suffer fuel melting (Fermi 1).


There have been some sodium cooled designs that have used a closed cycle gas turbine using nitrogen as the working fluid for the secondary circuit, in order to avoid any issues with sodium-water reactions with a traditional steam Rankine secondary circuit.

There are also fast reactor designs using lead as the coolant rather than sodium. These are interesting, but less mature than sodium cooling. Sodium is better from a cooling and pumping perspective though.


Lead-bismuth eutectic.

A eutectic is an alloy that has a lower melting point than any of its components.

Lead-bismuth eutectic or LBE is a eutectic alloy of lead (44.5 at%) and bismuth (55.5 at%) used as a coolant in some nuclear reactors, and is a proposed coolant for the lead-cooled fast reactor, part of the Generation IV reactor initiative. It has a melting point of 123.5 °C/254.3 °F (pure lead melts at 327 °C/621 °F, pure bismuth at 271 °C/520 °F) and a boiling point of 1,670 °C/3,038 °F.

https://en.wikipedia.org/wiki/Lead-bismuth_eutectic


Bismuth under neutron irradiation produces polonium-210, which is extraordinarily dangerous.


Yes, some lead cooled reactor designs have used LBE, others pure lead. Though AFAIU so far the only lead cooled reactors that have actually been built and operated in production have used LBE. There is a pure lead cooled reactor under construction that should be started up in a few years if the current schedule holds.


I have a small benchmark program doing tight binding calculations of carbon nanostructures that I have implemented in C++ with Eigen, C++ with Armadillo, Fortran, Python/numpy, and Julia. It's been a while since I've tested it but IIRC all the other implementations were about on par, except for python which was about half the speed of the others. Haven't tried with numba.

To bring Julia performance on par with the compiled languages I had to do a little bit of profiling and tweaking using @views.

https://gitlab.com/jabl/tb


I don't think the situation is that comparable to Python, since in Python the library has to be present at runtime. And given the dysfunctional state of Python packaging, there are potentially a lot of grey hairs saved by not requiring anything beyond the stdlib.

With Rust, it's an issue at compile-time only. You can then copy the binary around without having to worry about which crates were needed to build it.

Of course, there is the question of trust and discoverability. Maybe Rust would be served by a larger stdlib, or some other mechanism of saying this is a collection of high-quality well maintained libraries, prefer these if applicable. Perhaps the thing the blog post author hints at would be a solution without having to bundle everything into the stdlib, we'll see.

But I'd be somewhat wary of shoveling a lot of stuff into the stdlib, as it's very hard to get rid of deprecated functionality. E.g. how many command-line argument parsers are there in the Python stdlib? Three?


> You can look to Swift for prior art on how this can be done: https://faultlore.com/blah/swift-abi/

> It would be very hard to accomplish.

Since Rust cares very much about zero-overhead abstractions and performance, I would guess if something like this were to be implemented, it would have to be via some optional (crate/module/function?) attributes, and the default would remain the existing monomorphization style of code generation.


Swift’s approach still monomorphizes within a binary, and only has runtime costs when calling code across a dylib boundary. I think rust could do something like this as well.
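To illustrate the two code-generation strategies being contrasted here, a small Rust sketch (my own illustration, not how Swift actually implements its ABI): the generic function is monomorphized, so each concrete type gets its own specialized copy baked into the binary, while the trait-object version compiles once and dispatches through a vtable at runtime, which is roughly the flavor of indirection you'd pay when calling across a dylib boundary.

```rust
use std::fmt::Display;

// Generic function: the compiler monomorphizes this, emitting a separate
// specialized copy for every concrete T it is instantiated with. Fast,
// but the generated code is fixed at compile time, so there's no stable ABI.
fn describe_mono<T: Display>(x: T) -> String {
    format!("value: {}", x)
}

// Trait-object version: a single compiled copy, with the concrete type's
// methods reached through a vtable at runtime -- a rough analogue of the
// indirection incurred when calling polymorphic code across a dylib boundary.
fn describe_dyn(x: &dyn Display) -> String {
    format!("value: {}", x)
}

fn main() {
    // Two monomorphized instantiations: describe_mono::<i32> and ::<&str>.
    println!("{}", describe_mono(42));
    println!("{}", describe_mono("hello"));
    // One compiled function, dynamically dispatched for both types.
    println!("{}", describe_dyn(&42));
    println!("{}", describe_dyn(&"hello"));
}
```

A hybrid along these lines, monomorphizing within a binary and falling back to dynamic dispatch only at stable-ABI boundaries, is what the comment above suggests Rust could adopt.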


If Rust and static linking were to become much more popular, Linux distros could adopt some rsync/zsync like binary diff protocol for updates instead of pulling entire packages from scratch.
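The idea can be sketched as a naive fixed-block delta (a toy illustration, not the actual rsync/zsync algorithm, which uses a rolling checksum so matches can start at any byte offset): index the old binary's blocks by content, then encode the new binary as "copy old block N" or "literal bytes", so an update only ships the literals.

```rust
use std::collections::HashMap;

// Naive fixed-block delta in the spirit of rsync/zsync. Real rsync matches
// at arbitrary offsets via rolling checksums; this only matches on block
// boundaries. BLOCK is tiny here for demonstration purposes.
const BLOCK: usize = 4;

#[derive(Debug, PartialEq)]
enum Op {
    Copy(usize),      // index of a matching block in the old file
    Literal(Vec<u8>), // bytes not found in the old file
}

fn delta(old: &[u8], new: &[u8]) -> Vec<Op> {
    // Map each old block's contents to its index.
    let index: HashMap<&[u8], usize> =
        old.chunks(BLOCK).enumerate().map(|(i, b)| (b, i)).collect();
    new.chunks(BLOCK)
        .map(|chunk| match index.get(chunk) {
            Some(&i) => Op::Copy(i),
            None => Op::Literal(chunk.to_vec()),
        })
        .collect()
}

fn apply(old: &[u8], ops: &[Op]) -> Vec<u8> {
    let mut out = Vec::new();
    for op in ops {
        match op {
            Op::Copy(i) => out.extend_from_slice(old.chunks(BLOCK).nth(*i).unwrap()),
            Op::Literal(b) => out.extend_from_slice(b),
        }
    }
    out
}

fn main() {
    let old = b"aaaabbbbccccdddd";
    let new = b"aaaaXXXXccccdddd"; // one block changed
    let ops = delta(old, new);
    println!("{:?}", ops);
    assert_eq!(apply(old, &ops), new.to_vec());
}
```

With a statically linked binary where only one crate changed, most blocks would resolve to Copy ops, so only the changed regions (plus any relocation churn, which is the hard part in practice) would need to go over the wire.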


Even then, they would still need to rebuild massive amounts of packages on updates. That is nice in theory, but see the number of bugs reported in Debian because upstream projects fail to rebuild as expected. "I don't have the exact micro version of this dependency I'm expecting" is one common reason, but there are many others. It's a pretty regular thing, and would therefore be burdensome to distro maintainers.


Static linking used to be popular, as it was the only way of linking in most computer systems, outside expensive hardware like Xerox workstations, Lisp machines, ETHZ, or what have you.

One of the first pieces of consumer hardware to support dynamic linking was the Amiga, with its Libraries and DataTypes.

We moved away from having a full blown OS done with static linking, with exception of embedded deployments and firmware, for many reasons.


Yeah I'm not really convinced that this matters at all tbh

