Hacker News | hacker_homie's comments

For whom? Roboute Guilliman would be using the Cawl Inferior like ChatGPT.

I've really loved the binary packages; now I can install Gentoo on my laptops.

The curves are a lie; the window is still square. Can we stop putting lipstick on the pig? I just want my computers to work, not look like something out of a sci-fi movie.

I used https://github.com/rvaiya/keyd with

```
[ids]
*

[main]
f23 = oneshot(control)

[control]
toggle(control)
```

to turn it back into a Ctrl key.

Yeah, that's because the original NPUs were a rush job. The AMD AI Max is the only one that's worth anything, in my opinion.

I have a Strix Halo 395 128GB laptop from HP running Ubuntu. I have not been able to do anything with the NPU. I was hoping it could be used for OpenCL, but that does not seem to be the case.

What examples do you have of making the NPU in this processor useful please?


All the videos I've seen of AI workloads with an AMD Strix Halo with 128GB setup have used the GPU for the processing. It has a powerful iGPU and unified memory more like Apple's M chips.

The Apple M series chips are solid for inference.

Correct me if I'm wrong, but I thought everyone was still doing inference on the GPU for Apple silicon.

The Apple M series is an SoC. The CPU, GPU, NPU, and RAM are all part of the chip.

The RAM is not part of the SoC. It's a bunch of separate commodity RAM dies packaged alongside the SoC.

Is that because of the actual processing unit or because they doubled the width of the memory bus?

It's because it comes with a decent iGPU, not because of the NPU inside of it. The NPU portion is still the standard tiny ~50 TOPS and could be fed by normal RAM bandwidth like on a much cheaper machine.

On the RAM bandwidth side it depends whether you want to look at it as "glass half full" or "glass half empty". For "glass half full": the GPU has access to a ton of RAM at ~2x-4x the bandwidth of the normal system memory an iGPU would have, so you can load really big models. For "glass half empty": that GPU memory bandwidth is still barely half that of even a 5060 dGPU (which doesn't have to share any of its bandwidth with the rest of the system), but a dGPU won't fit as large a model and won't be as power efficient.

Speaking of power efficiency: it is decently power efficient, but I wouldn't run AI workloads on mine unless it was plugged in, as doing so still eats through the battery pretty quickly. Great general workstation laptop for the size and wattage, though.


You are thinking of std::swap; std::rotate can throw bad_alloc.

I see it says that it may throw bad_alloc, but it's not clear why, since the algorithm itself (e.g. see "Possible implementation" below) can easily be done in-place.

https://en.cppreference.com/w/cpp/algorithm/rotate.html

I'm wondering if the bad_alloc might be because a single temporary element (of whatever type the iterators point to) is needed to swap each pair of elements, or maybe to allow for an inefficient implementation that chooses not to do it in-place.


Time to go outside and touch grass; it's the only financially responsible thing to do until prices become reasonable again.

2026: Copilot chat boxes everywhere. Users type everything. Peak convenience.

You type the question, Copilot tells you where to click.

"Bit higher, higher, no too far, down now, no, below the red line. The other red line. Yes this one... no the one you were just over".


Press any key to continue, or any other key to quit.

Probably because there are internal conflicts between the Store team and the applications group that neither of them wants to deal with anymore. This might have been for Windows S support (remember store-only Windows?).

They have their own distribution system, so they don't need this anymore.


ClickOnce, for a brief shining moment, was the closest we ever got to being able to deploy an application like a webpage.

I did run into a lot of issues with the Store/WinRT APIs, where there were backdoors the NTDev team used to work around all the limitations, but they would never publish them.

