My dad grew up in the 50s & 60s. During COVID he purchased my daughters the, and I quote, "shittiest briefcase record players" he could find. Both girls listen to their music on their devices, but also buy vinyl. The other day, my eldest came down from her room complaining that her vinyl "sounded awful". I told her to bring it up with her Grampy. His response: "you can't appreciate good playback until you've heard awful playback on shitty record players like I had to." My eldest is now plotting a complete hifi system, and is learning all about how to transfer "vinyl" to "digital" without losing the parts of the vinyl she likes.
Don't miss how this works. It's not a server-side application: the code runs entirely in your browser using SQLite compiled to WASM. But rather than fetching the full 22GB database, it uses a clever hack that retrieves just the "shards" of the SQLite database needed for the page you are viewing.
I watched it in the browser network panel and saw it fetch:
It's reminiscent of that brilliant sql.js VFS trick from a few years ago: https://github.com/phiresky/sql.js-httpvfs - only that one used HTTP range requests, while this one uses sharded files instead.
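Here's a minimal sketch of the shape of that hack, assuming a hypothetical shard naming scheme and page size (this is not the project's actual code or API, just an illustration of shard-on-demand fetching):

```ts
// Sketch: serve SQLite pages to a WASM VFS by fetching fixed-size
// "shard" files on demand. All names and sizes here are assumptions.
const PAGE_SIZE = 4096;        // assumed SQLite page size
const PAGES_PER_SHARD = 1024;  // assumed shard granularity (~4 MB each)
const shardCache = new Map<number, Promise<ArrayBuffer>>();

function fetchShard(shardIndex: number): Promise<ArrayBuffer> {
  let shard = shardCache.get(shardIndex);
  if (!shard) {
    // Hypothetical naming scheme: db.sqlite3.shard-0000, -0001, ...
    const url = `/db.sqlite3.shard-${String(shardIndex).padStart(4, "0")}`;
    shard = fetch(url).then((r) => {
      if (!r.ok) throw new Error(`shard ${shardIndex}: HTTP ${r.status}`);
      return r.arrayBuffer();
    });
    shardCache.set(shardIndex, shard); // each shard is fetched at most once
  }
  return shard;
}

// The read callback a WASM VFS would ultimately funnel into:
async function readPage(pageNumber: number): Promise<Uint8Array> {
  const shardIndex = Math.floor(pageNumber / PAGES_PER_SHARD);
  const buf = await fetchShard(shardIndex);
  const offset = (pageNumber % PAGES_PER_SHARD) * PAGE_SIZE;
  return new Uint8Array(buf, offset, PAGE_SIZE);
}
```

The net effect is the same as the range-request trick: the browser only ever downloads the handful of shards that the query's B-tree walk actually touches.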
That is fair, particularly compared to Janet Jackson! I will add detail.
In their younger days, two distinguished engineers, Bryan Cantrill and Brendan Gregg, made this video where they scream at a data storage server nicknamed Thumper. Screaming at it has surprising results, which they observe with a novel software technology called DTrace.
The Sun Fire X4500 was a dense storage server: 4U with 48 disks, insane I/O performance, and a newish filesystem called ZFS. The video is not only funny in content, it features technology and technologists that became very impactful, hence the classic tag.
---
I love the lore, so I'll drop more.
While our team previously used AFS (mainly for its great caching) and many storage servers, this hardware combined with its software allowed us to consolidate, manage, and access data in new ways, alleviating many of our market data analysis problems.
We switched to NFS, which previously was not performant enough for us on other hw/sw architectures. While using NFS with the Thumpers and then Thors (X4540) was fantastic, eventually the data scales became hard again and we made a distributed immutable filesystem that looked like the Hadoop HDFS and Cassandra file systems, named after our favorite Klingon Worf (Write-Once Read-Frequently).
Interestingly, in 2025 both XTX [1] and HRT [2] open-sourced their distributed file systems, which are pretty similar to it, using 2020s tech rather than 2000s. HRT's is based on Meta's Tectonic, which is a spiritual successor to Cassandra.
I wrote about our parallel HFT networking journey once upon a time on HN. [3]
A company adopts some software with a free but not copyleft license. "Adopts" means they declare "this is good, we will use it".
Developers help develop the software (free of charge) and the company says thank you very much for the free labour.
Company puts that software into everything it does, and pushes it into the infrastructure of everything it does.
Some machines run that software because an individual developer put it there; other machines run that software because a company put it there, sometimes by exerting some sort of power for it to end up there (for example, economic incentives to vendors, like Android).
At some point the company says "you know what, we like this software so much that we're going to fork it, but the fork isn't going to be free or open source. It's going to be just ours, and we're not going to share the improvements we made".
But now that software is already running in a lot of machines.
Then the company says "we're going to tweak the software a bit, so that it's no longer inter-operable with the free version. You have to install our proprietary version, or you're locked out" (out of whatever we're discussing hypothetically. Could be a network, a standard, a protocol, etc).
Developers go "shit, I guess we need to run the proprietary version now. we lost control of it."
This is what happened, e.g., with Chrome. There's Chromium, which anyone can build. But that's not Chrome. And Chrome is what everybody uses, because Google has lock-in power. Then Google says "oh, I'm going to disallow you from running the extensions you like, so we can show you more ads". Then they make tweaks to Chrome so that websites only render well if they use certain APIs, so now competitors to Chrome are forced to implement those APIs, but those aren't public.
And all of this was initially built by free labour, which Google took, by people who thought they were contributing to some commons in a sense.
Copyleft licenses protect against this. Part of the license says: "if you use this license, and you make changes to the software, you have to share the changes as well; you can't keep them for yourself".
I'm taking a moment to recognize once more the work that user @atdrummond (Alex Thomas Drummond) did for a couple years to help others here. I did not know him, don’t think I ever interacted with him, and I did not benefit from his generosity, but I admired his kindness. Just beautiful.
If this had been available in 2010, Redis scripting would have been JavaScript and not Lua. Lua was chosen based on the implementation requirements (small, fast, ANSI C), not on the language ones. I appreciate certain ideas in Lua, and people love it, but I was never able to like Lua, because it departs from a more Algol-like syntax and semantics without good reasons, for my taste. This creates friction for newcomers. I love friction when it opens new useful ideas and abstractions that are worth it: if you learn Smalltalk or FORTH and for some time you are lost, that's part of how the languages are different. But I think for Lua this is not true enough: it feels like it departs from what people know without good reasons.
I'm the Manager of the Computing group at JILA at CU, where utcnist*.colorado.edu used to be housed. Those machines were, for years, consistently the highest bandwidth usage computers on campus.
Unfortunately, the HP cesium clock that backed the utcnist systems failed a few weeks ago, so they're offline. I believe the plan is to decommission those servers anyway - NIST doesn't even list them on the NTP status page anymore, and Judah Levine has retired (though he still comes in frequently). Judah told me in the past that the typical plan in this situation is that you reference a spare HP clock with the clock at NIST, then drive it over to JILA backed by some sort of battery and put it in the rack, then send in the broken one for refurb (~$20k-$40k; new box is closer to $75k). The same is true for the WWVB station, should its clocks fail.
There is fiber that connects NIST to CU (it's part of the BRAN - Boulder Research and Administration Network). Typically that's used when comparing some of the new clocks at JILA (like Jun Ye's strontium clock) to NIST's reference. Fun fact: Some years back the group was noticing loss due to the fiber couplers in various closets between JILA & NIST... so they went to the closets and directly spliced the fibers to each other. It's now one single strand of fiber between JILA & NIST Boulder.
That fiber wasn't connected to the clock that backed utcnist though. utcnist's clock was a commercial cesium clock box from HP that was also fed by GPS. This setup was not particularly sensitive to people being in the room or anything.
Another fun fact: utcnist3 was an FPGA developed in-house to respond to NTP traffic. Super cool project, though I didn't have anything to do with it, haha.
Agreed, which is why what GP suggests is much more sensible: it's venturing into known territory, except only one party to the conversation knows it, and the other literally cannot know it. It would be a fantastic way to earn fast intuition for what LLMs are and aren't capable of.
I wonder if you could quiz it on some of the ideas of Frege, Peano, and Russell and see if, through questioning, it could get to some of the ideas of Gödel, Church, and Turing - and get it to "vibe code", or more like "vibe math", some program in lambda calculus or something.
Playing with the science and technical ideas of the time would be amazing - like where you know some later physicist found an exception to a theory, questioning the model's assumptions, seeing how a model of that time might defend itself, etc.
I used to teach 19th-century history, and the responses definitely sound like a Victorian-era writer. And they of course sound like writing (books and periodicals etc) rather than "chat": as other responders allude to, the fine-tuning or RL process for making them good at conversation was presumably quite different from what is used for most chatbots, and they're leaning very heavily into the pre-training texts. We don't have any living Victorians to RLHF on: we just have what they wrote.
To go a little deeper on the idea of 19th-century "chat": I did a PhD on this period and yet I would be hard-pushed to tell you what actual 19th-century conversations were like. There are plenty of literary depictions of conversation from the 19th century of presumably varying levels of accuracy, but we don't really have great direct historical sources of everyday human conversations until sound recording technology got good in the 20th century. Even good 19th-century transcripts of actual human speech tend to be from formal things like court testimony or parliamentary speeches, not everyday interactions. The vast majority of human communication in the premodern past was the spoken word, and it's almost all invisible in the historical sources.
Anyway, this is a really interesting project, and I'm looking forward to trying the models out myself!
It is not just a way of writing ring buffers. It's a way of implementing concurrent non-blocking single-reader single-writer atomic ring buffers with only atomic load and store (and memory barriers).
The author says that non-power-of-two is not possible, but I'm pretty sure it is if you use a conditional instead of integer modulus.
I first learnt of this technique from Phil Burk; we've been using it in PortAudio forever. The technique is also widely known in FPGA/hardware circles, see: "Simulation and Synthesis Techniques for Asynchronous FIFO Design", Clifford E. Cummings, Sunburst Design, Inc.
Having worked at Mozilla a while ago, I can say the CEO role is one I wouldn't wish on my worst enemy. Success is oddly defined: it's a non-profit (well, a for-profit owned by a non-profit) that needs to make a big profit in a short amount of time. And anything done to make that profit will annoy the community.
I hope Anthony leans into what makes Mozilla special. The past few years, Mozilla's business model has been to just meekly "us-too!" trends... IoT, Firefox OS, and more recently AI.
What Mozilla is good at, though, is taking complex things the average user doesn't really understand, and making them palatable and safe. They did this with web standards... nobody cared about web standards, but Mozilla focused on usability.
(Slight aside: it's not a coincidence the best CEO Mozilla ever had was a designer.)
I'm not an AI hater, but I don't think Mozilla can compete here. There's just too much good stuff already, and it's not the type of thing Mozilla will shine with.
Instead, if I were CEO, I'd go the opposite way: I'd focus on privacy. Not AI privacy, but privacy in general. Buy a really great email provider, and start to own "identity on the internet". As there are more bots and less privacy, identity is going to be incredibly important over the years... and right now, Google de facto owns identity. Make it free, but also give people a way to pay.
Would this work? I don't know. But like I said, it's not a job I envy.
As the first author of the salmon paper, yes, this was exactly our point. fMRI can be an amazing tool, but if you are going to trust the results you need to have proper statistical corrections along the way. Researchers were capitalizing on chance in many cases, failing to do effective corrections for the multiple comparisons problem. We argued with the dead fish that they should.
There you can download it in high quality, and it's pay-what-you-want: you can get it for free if you want, or pay what you feel like and support me. Either way, I'm happy that you enjoy it!
The music should also be on Spotify, Apple Music, and most music streaming services within the next 24h.
A bit about the process of scoring Size of Life:
I’ve worked with Neal before on a couple of his other games, including Absurd Trolley Problems, so we were used to working together (and with his producer—you’re awesome, Liz!). When Neal told me about Size of Life, we had an inspiring conversation about how the music could make the players feel.
The core idea was that it should enhance that feeling of wondrous discovery, but subtly, without taking the attention away from the beautiful illustrations.
I also thought it should reflect the organisms' increasing size—as some of you pointed out, the music grows with them. I think of it as a single instrument that builds upon itself, like the cells in an increasingly complex organism. So I composed 12 layers that loop indefinitely—as you progress, each layer is added, and as you go back, they’re subtracted. The effect is most clear if you get to the end and then return to the smaller organisms!
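For the curious, the mechanism is simple to sketch in Web Audio terms - this is just an illustration of the add/subtract layering idea, not the game's actual implementation:

```ts
// Sketch: 12 synchronized loops, faded in/out as the player progresses.
const ctx = new AudioContext();
const layers: GainNode[] = [];

async function loadLayers(urls: string[]) {
  for (const url of urls) {
    const buf = await ctx.decodeAudioData(await (await fetch(url)).arrayBuffer());
    const src = new AudioBufferSourceNode(ctx, { buffer: buf, loop: true });
    const gain = new GainNode(ctx, { gain: 0 }); // all layers start silent
    src.connect(gain).connect(ctx.destination);
    src.start(0); // every loop runs from t=0 so they stay in sync
    layers.push(gain);
  }
}

// Called as the player moves to larger or smaller organisms:
function setProgress(level: number) { // 0..12 layers audible
  layers.forEach((g, i) =>
    g.gain.setTargetAtTime(i < level ? 1 : 0, ctx.currentTime, 0.5));
}
```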
Since the game has an encyclopedia vibe to it, I proposed going with a string instrument to give it a subtle "Enlightenment-era" and "cultural" feel. I suspected the cello could be a good instrument because of its range and expressivity.
Coincidentally, the next week I met the cellist Iratxe Ibaibarriaga at a game conference in Barcelona, where I’m based, and she immediately became the ideal person for it. She’s done a wonderful job bringing a ton of expressivity to the playing, and it’s been a delight to work with her.
I got very excited when Neal told me he was making an educational game—I come from a family of school teachers. I’ve been scoring games for over 10 years, but this is the first educational game I’ve scored.
In a way, now the circle feels complete!
(if anyone wants to reach out, feel free to do so! You can find me and all my stuff here: https://www.aleixramon.com/ )
The odd thing about all of this (well, I guess it's not odd, just ironic) is that when Google AdWords started, one of the notable things about it was that anyone could start serving or buying ads. You just needed a credit card. I think that bought Google a lot of credibility (along with the ads being text-only) as they entered an already disreputable space: ordinary users and small businesses felt they were getting the same treatment as more faceless, distant big businesses.
I have a friend who says Google's decline came when they bought DoubleClick in 2008 and suffered a reverse takeover: their customers shifted from being Internet users to being other, similarly sized corporations.
One thing this really highlights to me is how often the "boring" takes end up being the most accurate. The provocative, high-energy threads are usually the ones that age the worst.
If an LLM were acting as a kind of historian revisiting today’s debates with future context, I’d bet it would see the same pattern again and again: the sober, incremental claims quietly hold up, while the hyperconfident ones collapse.
Something like "Lithium-ion battery pack prices fall to $108/kWh" is classic cost-curve progress. Boring, steady, and historically extremely reliable over long horizons. Probably one of the most likely headlines today to age correctly, even if it gets little attention.
On the flip side, stuff like "New benchmark shows top LLMs struggle in real mental health care" feels like high-risk framing. Benchmarks rotate constantly, and “struggle” headlines almost always age badly as models jump whole generations.
I bet there are many "boring but right" takes we overlook today, and I wonder if there's a practical way to surface them before hindsight does.
I downloaded the original article page, had Claude extract the submission info to JSON, then wrote a script (by hand ;) to feed each submission title to gemini-3-pro and ask it for an article webpage and then for a random number of comments.
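The loop itself is tiny. Here's a rough sketch of the shape of it - the endpoint follows the public generateContent REST API, but the model name is taken from the comment above and the prompts are made up; this isn't the author's actual script:

```ts
// Sketch: ask the model for a fake article page, then for N comments.
type Submission = { title: string };

async function generate(prompt: string): Promise<string> {
  const url =
    "https://generativelanguage.googleapis.com/v1beta/models/" +
    `gemini-3-pro:generateContent?key=${process.env.GEMINI_API_KEY}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
  });
  const json = await res.json();
  return json.candidates[0].content.parts[0].text;
}

async function run(submissions: Submission[]) {
  for (const s of submissions) {
    const page = await generate(`Write the future article page for: "${s.title}"`);
    const n = 1 + Math.floor(Math.random() * 20); // random comment count
    const thread = await generate(`Write ${n} HN-style comments on: "${s.title}"`);
    // ...write `page` and `thread` out as HTML...
  }
}
```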
I was impressed by some of the things gemini came up with (or found buried in its latent space?). Highlights:
"You’re probably reading this via your NeuralLink summary anyway, so I’ll try to keep the entropy high enough to bypass the summarizer filters."
"This submission has been flagged by the Auto-Reviewer v7.0 due to high similarity with "Running DOOM on a Mitochondria" (2034)."
"Zig v1.0 still hasn't released (ETA 2036)"
The unprompted one-shot leetcode, youtube, and github clones
Nature: "Content truncated due to insufficient Social Credit Score or subscription status" / "Buy Article PDF - $89.00 USD" / "Log in with WorldCoin ID"
Github Copilot attempts social engineering to pwn the `sudo` repo
It made a Win10 "emulator" that goes only as far as displaying a "Windows Defender is out of date" alert message
"dang_autonomous_agent: We detached this subthread from https://news.ycombinator.com/item?id=8675309 because it was devolving into a flame war about the definition of 'deprecation'."
I was there today. We happened to notice the smoke over Kilauea while driving to Hilo, then checked out USGS cams, and immediately drove there and spent the next 7 hours getting mesmerized.
As my first eruption encounter, it brought several things I didn't expect: the heat even from a long distance, enough to keep me warm in my shorts at 60F, and the loud rumble, like a giant waterfall. The flow of lava was way faster than I expected too, almost like oil.
Random nerd note: The history is slightly wrong. Netscape had their own "interactive script" language at the time Sun started talking about Java, which somehow got the front page of the Mercury News when they announced it in March of 1995. At the Third International World Wide Web Conference in Darmstadt, Germany, everyone was talking about it, and I was roped into giving a session on it during the lunch break (which then had to be stopped because no one was going to the keynote by SGI :-)). Everyone there was excited and saying "forget everything, this is the future." So, Netscape wanted to incorporate it into Netscape Navigator (their browser), but they had a small problem, which was that this was kind of a competitor to their own scripting language. They wanted to call it JavaScript to ride the coattails of the Java excitement, and Sun legal only agreed to let them do that if they would promise to ship Java in their browser when it hit 1.0 (which it did in September of that year).
So Netscape got visibility for their language, Sun got the #1 browser to ship their language, and they had leverage over Microsoft to extortionately license it for Internet Explorer. There were debates among the Java team about whether or not this was a "good" thing - for Sun, sure, but the confusion over what was "Java" was not. The politics won of course, and when they refused to let the standards organization use the name "JavaScript", the term ECMAScript was created.
So there's that. But how we got here isn't particularly germane to the argument that yes, we should all be able to call it the same thing.
This response turned into more of an essay in general, and not specifically a response to your post, marginalia_nu. :)
Sharing information, to me, was what made things so great in the hacker culture of the 80s and 90s. Just people helping people explore and no expectation of anything in return. What could you possibly want for? There was tons of great information[1] all around everywhere you turned.
I'm disappointed by how so much of the web has become commercialized. Not that I'm against capitalism or advertising (on principle) or making money; I've done all those, myself. But while great information used to be a high percentage of the information available, now it's a tiny slice of signal in the chaff--when people care more about making money on content than sharing content, the results are subpar.
So I love the small internet movement. I love hanging out on a few Usenet groups (now that Google has fucked off). I love neocities. And I LOVE just having my own webpage where I can do my part and share some information that people find entertaining or helpful.
There's that gap from being clueless to having the light bulb turn on. (I've been learning Rust on and off and, believe me, I've opened plenty of doors to dark rooms, and in most of those I have not yet found the light switch.) And I love the challenge of finding helpful ways to bridge that gap. "If only they'd said X to begin with!" marks what I'm looking for.
I'm not always correct (I challenge anyone to write 5000 words on computing with no errors, let alone 750,000) or as clear as I could be, but I think that's OK. Anyone aspiring to write helpful information and put it online should just go for it! People will correct you if you're wrong[2] :) and you'll learn a *ton*. And your readers will learn something. And you'll have made the small web a slightly larger place, giving us more freedom to ignore the large web.
[1] When I say "great information", I don't necessarily mean "high quality". But the intention was there, and I feel that makes the difference.
[2] It can be really embarrassing to put bad information out there (for me, anyway). I don't want people to find out I don't know something and think less of me. But that's really illogical--I don't even personally know my critics! And here's the thing: when the critics are right (and they're often right!), you can go fix your material. And then it becomes more correct. After a short time of fixing mistakes critics point out, you get on the long tail of errors, and these are things that people are a lot less judgmental about. The short of it is, do the best you can, put your writing out there, correct errors as they are reported or as you find them, and repeat. I cannot stress how grateful I am to everyone who has helped me improve my guides, whether mean-spirited or not, because it's helped me and so many others learn the right thing.
An interesting fact is that while almost all of the Solar System started as gas, which then condensed into solid bodies that aggregated into planets, a small part of the original matter of the Solar System consisted of solid dust particles that arrived as such from the stellar explosions that propelled them.
So in meteorites, or on the surfaces of other bodies not affected by weather, like the Moon or asteroids, we can identify small mineral grains that are true stardust, i.e. interstellar grains that have remained unchanged since long before the formation of the Earth and of the Solar System.
We can identify such grains by their abnormal isotopic composition in comparison with the matter of the Solar System. While many such interstellar grains should be just silicates, those are hard to extract from the rocks formed here, which are chemically similar.
Because of that, the interstellar grains that are best known are those which come from stellar systems that are chemically unlike the Solar System. In most stellar systems there is more oxygen than carbon; those stellar systems are like ours, with planets having iron cores covered by mantles and crusts made of silicates, covered in turn by a layer of ice.
In the other kind of stellar system there is more carbon than oxygen, and there the planets would be formed from minerals that are very rare on Earth, i.e. mainly from silicon carbide and various metallic carbides, along with great amounts of graphite and diamond.
So most of the interstellar grains (i.e. true stardust) that have been identified and studied are grains of silicon carbide, graphite, diamond or titanium carbide, which are easy to extract from the silicates formed in the Solar System.
Thanks HN! I regularly open HN during lectures. There is no better way to show my students what software engineering entails and why I focus on certain topics.
Is SCRUM really as great as its evangelists claim? Let's read HN comments.
What are good use cases for UML? Let's check out HN.
Does anyone actually care about CoCoMo or CMMI? Let's read ... oh - nearly nobody's talking about it there. Maybe it won't be that relevant to the students.
Here's a crazy idea. I personally prefer the fidelity of an active ambient in-ear monitor (IEM), as used by musicians on stage, over the best hearing aids. Once a year, I do a month-long trial with the latest hearing aid models, and IMO the fidelity (especially the low end) and the comfort are just not there compared with the best active ambient IEMs. The difference between hearing aids and IEMs is blurring, but they are not yet fully interchangeable.
Standard IEMs isolate you from the world, which is the opposite of what a hearing aid does. However, a specific category called "Active Ambient" IEMs bridges this gap. These are IEMs with embedded high-fidelity microphones on the outer shell. They pick up the sound of the room (bandmates, crowd, conductor), amplify it, and blend it with your monitor mix. The accompanying bodypack or app often includes a multi-band EQ and Limiter. You can boost specific frequencies where you have hearing loss (e.g., boosting highs to hear cymbals or speech clearly) and set a volume ceiling to protect your remaining hearing. I have no ownership/sponsorship in the product, but I personally LOVE the ASI Audio 3DME (powered by Sensaphonics), which is the industry standard for this. [1] It allows you to use an app to shape the ambient sound to your hearing needs.
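To sketch what that EQ-plus-limiter chain looks like in software terms (purely illustrative - the 3DME does this in its own DSP hardware and app, not in a browser):

```ts
// Sketch: ambient mic -> peaking EQ boost -> limiter -> output.
const ctx = new AudioContext();

async function startAmbientChain() {
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  const src = ctx.createMediaStreamSource(mic);

  // Boost a band where the wearer has measured loss (values made up):
  const eq = new BiquadFilterNode(ctx, {
    type: "peaking", frequency: 4000, Q: 1, gain: 12,
  });

  // Hard ceiling to protect remaining hearing:
  const limiter = new DynamicsCompressorNode(ctx, {
    threshold: -12, knee: 0, ratio: 20, attack: 0.002, release: 0.1,
  });

  src.connect(eq).connect(limiter).connect(ctx.destination);
}
```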
The Pros: It provides hearing protection + monitoring + hearing enhancement in one device.
The Cons (Why they aren't daily hearing aids):
1) Form Factor: You are tethered to a belt pack. You likely won't wear a wired bodypack to a grocery store or dinner party.
2) Social Barrier: Wearing full-shell custom IEMs creates a "do not disturb" look that discourages conversation in social settings. This can be more socially alienating than a comparatively inconspicuous hearing aid.
3) Battery Life: IEM systems typically last 6–8 hours, whereas hearing aid batteries can last days or weeks.
Well… we have a culture of transparency we take seriously. I spent 3 years in law school that many times over my career have seemed like a waste, but days like today prove useful. I was in the triage video bridge call nearly the whole time. Spent some time after we got things under control talking to customers. Then went home. I’m currently in Lisbon at our EUHQ. I texted John Graham-Cumming, our former CTO and current Board member whose clarity of writing I’ve always admired. He came over. Brought his son (“to show that work isn’t always fun”). Our Chief Legal Officer (Doug) happened to be in town. He came over too. The team had put together a technical doc with all the details. A tick-tock of what had happened and when. I locked myself on a balcony and started writing the intro and conclusion in my trusty BBEdit text editor. John started working on the technical middle. Doug provided edits here and there on places we weren’t clear. At some point John ordered sushi, but from a place with limited delivery options, and I’m allergic to shellfish, so I ordered a burrito. The team continued to flesh out what happened. As we’d write we’d discover questions: how could a database permission change impact query results? Why were we making a permission change in the first place? We asked in the Google Doc. Answers came back. A few hours ago we declared it done. I read it top-to-bottom out loud for Doug, John, and John’s son. None of us were happy — we were embarrassed by what had happened — but we declared it true and accurate. I sent a draft to Michelle, who’s in SF. The technical teams gave it a once over. Our social media team staged it to our blog. I texted John to see if he wanted to post it to HN. He didn’t reply after a few minutes so I did. That was the process.
Hey, guy who made this here. This probably deserves a little explanation. First off, I'd like to tell you I'm really, really unemployed, and have the freedom to do some cool stuff. So I came up with a project idea. This is only a small part of a project I'm working on, but you'll see where this is going.
I was inspired by this video: https://www.youtube.com/watch?v=HRfbQJ6FdF0 from bitluni - it's a cluster of $0.10-0.20 RISC-V microcontrollers. For ten or twenty cents, these have a lot of GPIOs compared to other extremely low-cost microcontrollers: 18 GPIOs on the CH32V006F4U6. This got me thinking: what if I built a cluster of these chips? Basically re-doing bitluni's build.
But then I started thinking: at ten cents a chip, you could scale this to thousands. But how do you connect them? That problem was already solved in the 80s, with the Connection Machine. The basic idea here is to get 2^(whatever) chips, and connect them so each chip connects to (whatever) many other chips. The Connection Machine sold this as a hypercube, but it's better described as a Hamming-distance-one graph or something.
So I started building that. I did the LEDs first, just to get a handle on thousands of parts: https://x.com/ViolenceWorks/status/1987596162954903808 and started laying out the 'cards' of this thing. With a 'hypercube topology' you can split up the cube into different parts, so this thing is made of sixteen cards (2^4), with 256 chips on each card (2^8), meaning 4096 (2^12) chips in total. This requires a backplane. A huge backplane with 8196 nets. Non-trivial stuff.
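The wiring rule itself is one line of math: two chips are connected iff their ids differ in exactly one bit. A tiny illustration (my sketch, not the actual netlist generator):

```ts
// Neighbors of node n in a d-dimensional hypercube: flip each bit once.
function hypercubeNeighbors(n: number, d: number): number[] {
  const out: number[] = [];
  for (let bit = 0; bit < d; bit++) out.push(n ^ (1 << bit));
  return out;
}

// 4096 chips = 2^12, so every chip wires to exactly 12 others:
console.log(hypercubeNeighbors(0, 12)); // [1, 2, 4, 8, ..., 2048]
```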
So the real stumbling block for this project is the backplane, and this is basically the only way I could figure out how to build it: write an autorouter. It's a fun project that really couldn't have been done before the launch of KiCad 9; the new IPC API was a necessity to make this a reality. After that it's just some CuPy (because of the sparse matrices) and a few blockers in adapting PathFinder to circuit boards.
Last week I finished up the 'cloud routing' functionality and was able to run this on an A100 80GB instance on Vast.io; the board wouldn't fit in my 16GB 5080 I used for testing. That instance took 41 hours to route the board, and now I have the result back on my main battlestation ready for the bit of hand routing that's still needed. No, it's not perfect, but it's an autorouter. It's never going to be perfect.
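For anyone unfamiliar with PathFinder: it's negotiated congestion routing. Every net is routed greedily, shared resources get progressively more expensive, and the loop repeats until nothing is overused. A schematic of the cost function (my paraphrase of the published algorithm, not the author's CuPy code):

```ts
// PathFinder-style node cost: (base + history) * present-congestion.
type RoutingNode = {
  base: number;      // intrinsic cost (e.g. wirelength through this node)
  occupancy: number; // nets currently using the node
  capacity: number;  // nets allowed to use it (1 for a PCB track segment)
  history: number;   // accumulated congestion from past iterations
};

function nodeCost(n: RoutingNode, presentFactor: number): number {
  const over = Math.max(0, n.occupancy + 1 - n.capacity); // if we join too
  return (n.base + n.history) * (1 + presentFactor * over);
}

// After each full routing pass, punish nodes that are still overused so
// contested resources keep getting pricier until some net backs off:
function updateHistory(nodes: RoutingNode[], historyFactor: number) {
  for (const n of nodes) {
    if (n.occupancy > n.capacity) {
      n.history += historyFactor * (n.occupancy - n.capacity);
    }
  }
}
```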
This was a fun project, but what I really should have been doing the past three months or so is grinding leetcode. It's hard out there, and given that I've been rejected from every technician job I've applied to, I don't think this project is going to help me. Either way, this project... is not useful. There are probably a dozen engineers out there in the world that this _could_ help.
So, while it's working for my weird project, this is really not what hiring managers want to see.
I was a tube amp tech for several years, have built multiple guitar amps from scratch, and still dabble in it.
What may not be obvious is that modern tube amp designs are an evolutionary branch from 1930s technology, with only a little coming across from the transistor->digital tech tree. The amps of the 40s and 50s were pretty closely based on reference designs that came from RCA and other tube manufacturers.
Modern passive components (resistors, diodes and caps) are made to a far higher tolerance and are better understood, but tubes and transformers are a mixed bag. The older designs were somewhat overbuilt and can be more reliable or have tonal characteristics that are not available in modern parts.
Back when I was in Uni, so late 80s or early 90s, my dad was Project Manager on an Air Force project for a new F-111 flight simulator, when Australia upgraded the avionics on their F-111 fighter/bombers.
The sim cockpit had a spherical dome screen and a pair of Silicon Graphics Reality Engines driving two projectors. One projected an image across the entire screen at a relatively low resolution. The other was on a turret that panned/tilted with the pilot's helmet, and projected a high-resolution image, but only in a circle of perhaps 1.5m directly in front of where the helmet was aimed.
It was super fun being the project manager's kid, and getting to "play with it" on weekends sometimes. You could see what was happening while wearing the helmet and sitting in the seat if you tried - mostly by intentionally pointing your eyes in a different direction to your head - but when you were "flying around" it was totally believable, and it _looked_ like everything was high resolution. It was also fun watching other people fly it, and being able to see where they were looking, and where they weren't looking when the enemy was sneaking up on them.
This was a 5-year play by my dad. Shout out.