Hacker News | new | past | comments | ask | show | jobs | submit | voidmain0001's comments | login

From work, I have a ThinkPad X1 Gen 13 and it's awesome. Super lightweight, and great power. But when I tried Linux a few months ago its hardware was still too bleeding edge. Things may be better with newer kernels. I like the Gram as a personal device, so may I ask what model of Gram you have?

That laptop has an 8th gen Intel processor which should make it completely compatible with the Linux kernel, yet surprisingly it’s not. https://linux-hardware.org/?probe=2ec391ffdc Did Fujitsu choose an obscure component or interface?


Even on random ARM boards, it's not usually the CPU that's the problem. (It's generally drivers for everything else; eg. a sensor hub that should tell you when a laptop is in tablet mode)


Yeah, my implication was that the 8th gen CPU's platform controller hub should be supported. I should have stated that explicitly rather than implicitly.



The C1 Ultra looks really powerful. A 128 KB L1D cache on its own is a ~10% IPC improvement that should let it pull firmly ahead of the x86 competition, which is stuck at 32 KB due to the legacy 4 KB page size.


I'm sorry, I'm clearly missing something but why would page size impact L1 cache size?


When you do a cache lookup, some bits of the address form an "index" that selects a "bucket" (a set). Once you've selected the bucket, you compare a "tag" against each of its entries to find the matching cache line. The number of entries you check is the associativity of the cache, e.g. 8-way or 12-way associativity means there are 8 or 12 entries in each bucket. Higher associativity means a larger cache for the same number of buckets, but it also worsens latency, as you have to search through the bucket. These are the two points you can trade off: do you want more total buckets, or do you want each bucket to have more entries?

To do this lookup in the first place, you pull a number of bits from the virtual/physical address you're looking up, which tells you what bucket to start at. The minimum page size determines how many bits you can use from these addresses to refer to unique buckets. If you don't have a lot of bits, then you can't count very high (6 bits = 2^6 = 64 buckets) -- so to increase the size of the cache, you need to instead increase the associativity, which makes latency worse. For L1 cache, you basically never want to make latency worse, so you are practically capped here.

Platforms like Apple Silicon instead set the minimum page size to 16k, so you get more bits to count buckets (8 bits = 256 buckets). Thus you can increase the size of the cache while keeping associativity low; L1 cache on Apple Silicon is something crazy like 192 KB, and L2 (for the same reasons) is 16 MB+. x86 machines and software, for legacy reasons, are very much tied to the 4 KB page size, which puts something of a practical limit on the size of their downstream caches.

Look up "Virtually Indexed, Physically Tagged" (VIPT) caches for more info if you want it.


It’s not a hard limit, especially if you aren’t pushing the frequency wall like Intel. AMD used to use a 2-way 64 KB L1, Intel has an 8-way 64 KB L1i on Gracemont, and more to the point, high-end ARM Cortex has had 4-way 64 KB L1 caches since before they even supported 16 KB pages.


Yeah, I was more just trying to paint a broad picture. Nvidia in particular I think had fast and large-ish L1 on Tegra (X2?) despite being tied to 4k pages.


This is the most cursed part of modern CPU design, but the TLDR is that programs use virtual addresses while CPU caches are tagged with physical addresses, which means a cache lookup needs the virtual-to-physical translation. The problem is that for L1 cache, the latency requirement of 3-4 cycles is too strict to first do a TLB lookup and then an L1 cache lookup, so the L1 can only be indexed by the address bits which are identical between physical and virtual addresses. With a 4k page size, you only have 6 such bits between the size of your cache line (64 bytes) and the size of your page, which means that with an 8-way associative L1D, you only get 64 buckets * 8 ways * 64 bytes/line = 32 KB of L1 cache. If you want to increase that while keeping the 4k page size, you need to up the associativity, but that has massive power draw and area costs, which is why L1D on x86 hasn't increased since the Core 2 Duo in 2006.


Can you not take some of those virtual bits and get more buckets that way? I am sure it will make things more complicated, if nothing else by their possibly being mapped to the same physical page, but it doesn't sound like an impossible barrier. Maybe something terrible where a cache line keeps bouncing between different buckets in the rare case that does happen, but as long as you can keep the common case fast...

Otoh, L1 sizes haven't increased since my first processor; those CPU designers probably know more than I do.


That will break if any page is mapped at two VAs; you'll end up with conflicting cache lines for the same page...


The L2 already keeps track of what lines are somewhere in L1's for managing coherency.

Divide the cache into "meta-caches" indexed by the virtual bits and treat them as separate from the L2's point of view. Duplicate the data, and if somebody writes back, invalidate all the other copies. The hardware for doing this already exists on any multicore system. Sure, you will end up duplicating data sometimes, and it will actually be slower if you're writing to aliased locations. But is this happening often enough to be a problem compared to generally having a bigger cache?

It sounds to me like an engineering tradeoff that might or might not make sense, not a hard limit, which is what I think was being asserted. But as I also said, L1 sizes haven't increased in a while and smart people are working on this, so there is probably something I don't know.


This "divide" thing will add latency, which you really do not want to add to L1 hits.


Nice HN explanation! One hopes we will not be living with 4 KB pages forever, and perhaps L1 performance will be one more reason.


I'd really hope we do live with 4 KB pages forever. Variable page size would make many remapping optimizations (e.g. contiguous ring buffers) much harder to do, so we would need more abstraction layers, and more abstraction layers will eat away all the performance gains while also making everything more fragile and harder to understand. Hardware people really love those "performance hacks" that make life more painful for the upper layers in exchange for a few 0.1%s of speed. You could also probably gain some speed by dropping byte access and saying the minimal addressable unit is now 32 bits. Please don't. If you need a larger L1 cache - just increase associativity.


The extra L1 cache from a 64 KB page is on its own a ~5-10% perf improvement (and it decreases power use by reducing the number of times you go out to L2).


Funny, most of what you described sums up the Alpha architecture. 8KB pages + huge pages and, initially, only word-addressable memory, no byte access.

(Of course, it only took a few years for this to be rectified with the byte-word extension, which became required by ~all "real software" that supported Alpha)

It's also one of the only architectures Windows NT supported that didn't have 4KB pages, along with Itanium. I've wondered how (or if?) it handled programs that expect 4KB pages, especially in the x86 translation subsystem.


Maybe it's age-related, but if any of the scenarios you wrote happened to me, I would not be embarrassed to receive someone's assistance.


You can write add-ins for Excel. I’ve used this .NET component library to build WinForms apps that use the add-in interface for 13 years. Super simple; it uses the COM interface and supports all Windows Office versions.

https://www.add-in-express.com/add-in-net/index.php


Sure, stores use WiFi access points and BT to track MAC addresses and BT device IDs. Google does something similar with location and provides, in real time, how busy a location is, which I find super convenient. It’s a shame that shaping data into useful information also means it can be weaponized.


Cheaper foreign vehicles will also hurt the automotive industry in Ontario, Canada. So this is an interesting move from the Canadian federal government.

https://www.investontario.ca/automotive


Canada has no domestic automaker and US automakers, under pressure from Trump, are closing some factories in Canada & relocating production to the US.

Yes, the Canadian auto industry will take a hit, but it already has from the US (and might take more).


If he has his way, Trump will kill Canada's automotive industry. If you accept this as foregone, maybe partnering with the Chinese to create a new auto industry is a good idea.


What automotive industry? Name a single Canadian auto manufacturer of significance?


Just because the companies building cars in Canada are headquartered in the USA doesn't mean Canada doesn't have an automotive industry. The factories, equipment, and workers on Canadian soil could always be nationalized (not without retaliation, of course).


The claim made in the blog is that real world data is locked to institutions. Examples are medical, insurance, and banking data.


This podcast episode addresses sports betting and touches on why religious groups condone it.


"...scrolling your feed, it actually makes you money" https://youtu.be/BKRd5M0TE6w?t=133

"...double tap to do a quick buy and immediately I trade $10 worth of Avantis" https://youtu.be/BKRd5M0TE6w?t=211

