Many of the merge lanes in California are insanely short compared to those in the rest of the world. The worst are the ones where the merge lane starts immediately before an overpass and the exit lane begins immediately after it, so merging and exiting traffic has roughly the width of the overpass in which to change lanes. I found those infuriating when I used to visit friends in the Bay Area. The pattern where I live is the opposite (a long exit lane before the overpass and a long merge lane after it) and provides far better margins of safety.
It's plausible that the AI companies have given up storing data for training runs and just stream it off the Internet directly now. At this point it's probably cheaper to stream than to buy more SSDs and HDDs from a supply-constrained market.
Given the tectonic shift in priorities for Linux kernel development over the past decade, I'm willing to bet that many key developers would be more open to a microkernel architecture now than ~25 years ago. CPUs now have hardware features that reduce the overhead of MMU context switches, which eliminates a significant part of the cost of using isolated address spaces to contain code. The Meltdown and Spectre attacks really forced the security issue, to the point where major performance costs to improve security became acceptable in a way that was not the case in the '90s or '00s.
Please read the article in full. The GPU die, where all the computation occurs and the majority of the power is spent, will remain on TSMC.
TSMC plans for their A14 process to be in high volume production in 2028. It will include the backside power delivery first introduced in their A16 process (expected to reach high volume production in 2026/2027), which means it will be quite competitive with Intel.
"The GPU die will remain with TSMC, but portions of the I/O die are expected to leverage Intel's 18A or the planned 14A process slated for 2028, contingent on yield improvements."
Reading between the lines: Nvidia will most likely design a TSMC version of those I/O die portions in case Intel fails.
Intel has a decades-long reputation of failing its would-be foundry customers. Whether Nvidia's ownership stake is sufficient to overcome the inertia within Intel that has produced those failures remains to be seen.
Thanks for publishing your blog! The articles are quite enlightening, and it's interesting to see how semiconductors evolved in the '70s, '80s and '90s. Having grown up in that period, I feel it was a great time to learn, since one could understand an entire computer, yet details like this were completely inaccessible back then. Keep up the good work, knowing that it is appreciated!
A more personal question: is your reverse engineering work just a hobby, or is it tied in with your day-to-day work?
Thanks! The reverse engineering is just for fun. I have a software background, so I'm entirely unqualified to be doing this :-) But I figure that if I'm a programmer, I should know how computers really work.
2.4 GHz is unreliable for me these days due to interference from the bluetooth headphones and hearing aids other people are using. The issues tend to show up only during extended periods of video streaming, and having looked at a bunch of traffic captures over the holidays, it seems limited to certain streaming services sending very large bursts of traffic at extremely high rates (likely from servers with 100+ Gbps interfaces using TSO to reduce CPU usage). That makes me think the regularly paced bluetooth interference from real-time audio streams limits the maximum viable burst size for a 2.4 GHz wifi radio.
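A rough back-of-envelope sketch of that last point in Python. The numbers are assumptions for illustration only (roughly 10 ms between bluetooth audio packets and about 100 Mbit/s of usable 2.4 GHz airtime), not measurements from my captures:

    # Assumed gap between regularly paced bluetooth audio packets, and the
    # usable throughput of a 2.4 GHz wifi link; both are illustrative guesses.
    bt_interval_s = 0.010
    wifi_throughput_bps = 100e6

    # Largest burst that can drain within one clean-air window between
    # bluetooth transmissions.
    max_clean_burst = wifi_throughput_bps * bt_interval_s / 8
    print(f"burst that fits between bluetooth packets: {max_clean_burst / 1024:.0f} KiB")

    # A hypothetical TSO-sized burst from a fast streaming server arrives at
    # the access point nearly back-to-back and takes several bluetooth
    # intervals to drain over wifi.
    tso_burst = 512 * 1024
    print(f"time to drain a 512 KiB burst: {tso_burst * 8 / wifi_throughput_bps * 1000:.0f} ms")

Under those assumptions a large burst can't fit between bluetooth packets, which would explain why only the highest-rate streams trigger the failures.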
Yes, this happened a lot more over the Christmas holiday, when we had an extra 3 or 4 younger family members all listening to music and videos over their bluetooth earbuds and headphones. That made it much easier to track down; with only a single bluetooth device active, it had been quite a rare, intermittent failure.
Security for maps is basically impossible. Maps tend to have to be widely shared within government and engineering, and if you know what you're looking for, it's remarkably straightforward to find ways to access layers you would normally have to pay for. It's a consequence of the need to share data widely for a variety of purposes -- everything from zoning debates within a local county to maps for broadband funding across an entire country create a public need to share mapping information. Keys don't get revoked once projects end as that would result in all the previously published links becoming stale, which makes life harder for everyone doing research and planning new projects.
Moreover, university students in programs like architecture are given access to many map layers as part of the school's agreements with the organizations publishing the data. Without that access, students wouldn't be able to pick up the skills needed to do the work they will eventually be hired for. And if students can get data, then it's pretty much public.
Privacy is becoming (or already is) nearly impossible in the 21st century.
I asked my math and physics teachers in high school what the Fourier transform was, but none of them knew how to answer my questions (which were about digital signal processing -- modems were important things to us back in the early '90s). If I had to do it over again, I would have audited the local university's electrical engineering and math courses in the evenings. When MIT first ran 6.002x online back in 2012, the course finally answered a lot of those questions when it touched on filters and bandwidth.
Yeah I wish I had known about or had access to that stuff when I was a kid. To really learn and internalize ideas like negative frequency early would have been quite fun.