Hacker News | stqism's comments

In a sense, they were partially right while still being wrong. Based on Puget's data, it's apparent that motherboard vendors' overly aggressive default settings helped make the issue so prominent; with reasonable settings, the failure rate would be lower than that of comparable Zen CPUs.

Obviously Intel messed up badly, and those settings shouldn’t result in this behavior, but maybe this will convince system integrators to have more reasonable defaults in the future.

In a top-end system, we're already in territory where the GPU is the limiting factor; do we really need to default to giving the CPU so much power?


Even Puget's data, which thanks to their conservative motherboard configurations shows far fewer Raptor Lake defects than data from vendors with more aggressive settings, reveals an essential difference between the Zen 4 defects and the Raptor Lake defects.

The Zen 4 defects are random manufacturing defects, so most of them are caught by Puget while assembling and testing their systems, before the systems are sold to customers.

On the other hand, most of the Raptor Lake defects appear some time after the systems are sold, which implies some kind of wear-out mechanism, one that can affect either any Raptor Lake CPU or perhaps only CPUs with some kind of latent defect.

Because the Raptor Lake defects take time to appear, their number will likely continue to rise among the systems already sold, and the same statistics recomputed a few months from now might show even more Raptor Lake defects than today.
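The distinction above can be sketched with a toy reliability model: random manufacturing defects surface almost immediately (and get caught during burn-in), while a wear-out mechanism has an increasing hazard rate, often modeled with a Weibull distribution whose shape parameter is greater than 1. All numbers here are illustrative, not fitted to any real failure data.

```javascript
// Toy wear-out model: fraction of units failed by time t under a Weibull
// CDF. shape > 1 means the hazard rate increases with age, so field
// failures keep accumulating long after sale. Parameters are illustrative.
function weibullCdf(t, scale, shape) {
  // 1 - exp(-(t/scale)^shape): cumulative fraction failed by time t.
  return 1 - Math.exp(-((t / scale) ** shape));
}

// Wear-out (shape = 3): failures ramp up over months in the field,
// so a defect rate measured early understates the eventual total.
for (const months of [1, 6, 12]) {
  console.log(months, weibullCdf(months, 24, 3).toFixed(4));
}
```

With shape = 3, almost nothing fails in month one, but the cumulative failure fraction grows rapidly by month twelve, which is exactly why recomputing the statistics later would show a higher Raptor Lake defect count.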


They didn't pull those numbers out of thin air; those were Intel's specs when those boards were designed. They are, obviously, dangerous specs to run a chip at, hindsight being 20/20 and all.

Intel trying to pass the buck is as much of a problem as the CPUs themselves, really, because now you can't trust them.


There is nothing "on spec" about 4096W power limits and using the single-core clock multiplier for multi-core boost, among other deviations.

Intel programming the voltage curves wrong is on them, but that doesn't matter if the motherboards aren't going to run the CPUs according to specification out of the box. Intel calling out mobo vendors for their stupid defaults was justified and very much needed.


The issue is that Intel's guidelines are basically nonsensical and contradictory. What they call "recommended" settings is really three separate sets of options, with no clear indication which is the actual so-called baseline. Which was probably done entirely on purpose to facilitate blame slinging.


Intel's specifications are readily available[1][2] to the public. If you can't understand them that's your problem, not Intel's.

Incidentally, there is no such thing as a "baseline". Intel separately specifies an "Extreme Config" for applicable SKUs (the i9s), but otherwise there is only the one set of specifications.

The fact you are talking about "baseline" suggests you did not actually consult the specifications published by Intel, just like the mobo vendors who put out so-called "Intel Baseline Profiles" before they got chastised again for not actually reading and obeying the specs (and arguably they still don't).

[1]: https://edc.intel.com/content/www/us/en/design/products/plat...

[2]: https://edc.intel.com/content/www/us/en/design/products/plat...


This is not what I am referring to. I am referring to the chart posted in their official community post, most recently in June [1]. The chart is labelled "Intel Recommendations: 'Intel Default Settings'" (sic). Notice how "Baseline" is incomplete, and so is "Extreme". Also notice a bunch of notes saying "Intel does not recommend baseline" included on their "recommendations" chart. There are more little gotchas like that if you pay attention. Also note that this chart has been quietly revised at least once; I have a version from back in April that was less stringent and less guarded with notes than it is now.

[1]: https://community.intel.com/t5/Processors/June-2024-Guidance...


>Notice how "Baseline" is incomplete, and so is "Extreme".

Yeah, you still haven't read the specifications.

Please read the fucking specifications if you are going to partake in discussions concerning specifications.

Extreme is "incomplete" because those specifications apply and only apply to Raptor Lake i9 SKUs. "Baseline" is incomplete and not recommended because "baseline" does not exist in the specifications.

What's more, "Performance" also does not exist in the specifications per se. Most of it is the specifications copied verbatim, except for PL1, which is 125W for the concerned SKUs according to the specification and is actually noted as such by Intel in that chart.

The chart also excludes other important information, such as the PL2 time limits (56 seconds for the SKUs in the chart), the max core voltage rating of 1.72V, and AC/DC load lines and associated calibration.
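A minimal sketch of checking board defaults against those limits, using the figures cited above (PL1 of 125W, a 56-second PL2 time limit, the 1.72V max core voltage, and the 4096W "unlimited" sentinel many boards ship). The PL2 value of 253W is an assumption for illustration and should be checked against the actual datasheet for the SKU in question.

```javascript
// Spec limits for an i9-class Raptor Lake SKU. PL1, tau and the voltage
// cap come from the discussion above; PL2_W = 253 is an assumed value
// for illustration, not taken from the datasheet.
const SPEC = { PL1_W: 125, PL2_W: 253, tau_s: 56, vcoreMax_V: 1.72 };

// Return the names of board-default settings that exceed the spec limits.
function outOfSpec(boardDefaults) {
  return Object.keys(SPEC).filter((k) => boardDefaults[k] > SPEC[k]);
}

// A board shipping the 4096 W "unlimited" power-limit sentinel:
console.log(outOfSpec({ PL1_W: 4096, PL2_W: 4096, tau_s: 56, vcoreMax_V: 1.4 }));
```

A board running everything at or under the published limits would produce an empty list; the 4096W defaults get flagged on both power limits.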

Again: Please read the fucking specifications. You are contributing to the media sensationalism and emotional chest thumping, which is all worthless noise.


Ironically, I’ve just started asking LLMs to summarize paywalled content, and if the summary doesn’t answer my question I’ll check web archives or ask for the full article’s text.


Attended both keynotes, but the data I’ve seen suggests Intel’s offerings are higher IPC and lower power vs. the respective AMD AI CPUs.


Intel themselves claim 45 TOPS (IPC isn't a thing in NPUs) from their NPU. AMD didn't reveal a chip total for the new Ryzen series, but their NPU gets 50 TOPS. The only reason Intel's numbers are so much higher ("120 TOPS!") is that Intel included the chip total (what the general-purpose dies can achieve, at far lower power efficiency) with the NPU numbers; AMD doesn't. Presumably AMD's GP cores would be able to achieve similar numbers.

Given that AMD's NPUs were already ~78% more power efficient than Intel's and both are claiming ~50% power-efficiency increases, I'm not sure why there would be a big upset here.

Not really defending AMD here, but they have been investing hard in NPUs for four or five generations now, while Intel only really hopped on the bandwagon last generation. Unless you simply believe Intel has the best engineers in the world, period, there's no reason to believe they would close the gap that quickly.
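The efficiency claim above can be sanity-checked with one line of arithmetic: if both vendors scale efficiency by the same generational factor, the relative gap between them is unchanged, because the common factor cancels. The 1.78 and 1.5 below are the ~78% and ~50% figures from the comment.

```javascript
// If both sides improve power efficiency by the same factor, the ratio
// between them is preserved: (a*g)/(b*g) === a/b.
function relativeGap(amdEff, intelEff, gain) {
  return (amdEff * gain) / (intelEff * gain);
}

console.log(relativeGap(1.78, 1.0, 1.5)); // still ~1.78x after both gain 50%
```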


It sounds dumb, but honestly modern browsers feel way closer to the JRE than to anything designed just to fetch and render sites.


I'd need to check on this, but I've heard of issues around video decode in Firefox.


While I’d never personally use Chrome, you can describe this in another way too.

“Chrome supports draft standards like WebUSB, which more and more hardware tools and platforms have adopted so they can support users regardless of platform, without needing to build native apps for each one.”

You can argue this is good in other ways too: instead of a potentially invasive native hardware application for something you might configure or update once, you are using something heavily sandboxed that has to request permission for anything out of the ordinary. Another benefit is that, depending on what the hardware device is, it can suddenly be configured on platforms like Linux and FreeBSD, which vendors are much less inclined to support natively.
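For illustration, a minimal sketch of what such a configuration page does with the WebUSB API. This is browser-only (it won't run in Firefox or outside a browser), `requestDevice` must be called from a user gesture, and the vendor/product IDs below are made-up placeholders.

```javascript
// Build the filter object for navigator.usb.requestDevice(); only devices
// matching these IDs appear in the browser's device picker.
function buildUsbFilters(vendorId, productId) {
  return { filters: [{ vendorId, productId }] };
}

async function connectDevice() {
  if (typeof navigator === "undefined" || !navigator.usb) {
    // Firefox, or a non-browser runtime: WebUSB is not available.
    console.log("WebUSB not supported here");
    return null;
  }
  // Prompts the user to pick a matching device (placeholder IDs).
  const device = await navigator.usb.requestDevice(
    buildUsbFilters(0x1234, 0x5678)
  );
  await device.open(); // further config happens over transfers on `device`
  return device;
}

console.log(JSON.stringify(buildUsbFilters(0x1234, 0x5678)));
```

The permission prompt in `requestDevice` is the sandboxing point being described: the page never sees any USB device the user didn't explicitly grant.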

Say what you want about draft standards, but Firefox not playing ball and adopting commonly used ones is a massive miss that hurts its ability to compete.


PCIe issues are really only a thing with BCLK overclocking on systems that lack a secondary external clock generator, and BCLK overclocking is a pretty uncommon practice that isn’t practical for day-to-day usage.


On the hacker comment: in fairness, the weakest link in a lot of orgs is often the human one. Ignoring obvious stuff like phishing links, people can be disillusioned with their employer or their government through propaganda and other campaigns run by their adversaries. The Westerners who supported ISIS and the like didn’t do so in a vacuum, out of the blue.


While you’re probably right, this isn’t super relevant. From Google’s perspective, they just want to auction off the advertising slot and get the view; the actual click-through rate on that ad is a secondary issue.


Clicked into it to see; it’s monthly. It states that the next bill is 30 days after it goes live.


That is correct. I'm testing if there's interest, hence the pre-payment. The plans listed on the website are on a monthly basis.

