Hacker News | seba_dos1's comments

> I believe Steam Linux Runtime is an attempt at fixing this, but I'm not sure about its effectiveness.

It's effective enough for it to be practically a solved problem now.


You compile in a container/chroot with the userspace you target. Done.

In the context of games, that will likely be Steam Runtime.
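As a sketch of what "compile in a container with the userspace you target" looks like in practice: Valve publishes SDK container images for the Steam Runtime, so the build can run inside one. The exact image tag and build commands below are illustrative, not prescriptive.

```shell
# Build inside the Steam Runtime 3 ("sniper") SDK container so the binary
# links against the runtime's userspace rather than the host's.
# Image name follows Valve's published registry; adjust tag and build
# commands to match your project.
docker run --rm -v "$PWD":/src -w /src \
    registry.gitlab.steamos.cloud/steamrt/sniper/sdk \
    bash -c 'mkdir -p build && cd build && cmake .. && make -j"$(nproc)"'
```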


It's not that simple. You want to be able to use a modern toolchain (compilers that support the latest standards) but build a binary that runs on older systems.

The only way to achieve that is to get the older libraries installed on a newer system. Alternatively, you could try backporting the new toolchain to the older system, but that's a lot harder.


It may be hard-ish, sometimes. Sometimes it's a breeze. And sometimes you can just use the host's toolchain with the container's sysroot and proceed as if you were cross-compiling. Most of the time it's not a big deal.
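The "host toolchain with the container's sysroot" approach can be sketched roughly like this; the rootfs path and library directory are hypothetical placeholders for wherever your older distro's chroot lives.

```shell
# Sketch: point the host's modern GCC at an older distro's rootfs,
# treating it like a cross-compile. Paths are illustrative.
SYSROOT=/srv/chroots/debian-oldstable
gcc --sysroot="$SYSROOT" -o mygame main.c \
    -L"$SYSROOT/usr/lib/x86_64-linux-gnu" \
    -Wl,-rpath-link,"$SYSROOT/usr/lib/x86_64-linux-gnu"
```

The compiler itself stays new (so you get the latest language standards), while headers and libraries are resolved against the old userspace, so the result runs there.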

It's a handy tool, but it doesn't even give you a reasonable zram size by default and doesn't touch other things like page-cluster, so "I don't even need to set anything up" applies only if you don't mind it being quite far from optimal.
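For concreteness, the kind of tuning being referred to looks something like the following. The sizes, compression algorithm, and priority are illustrative choices, not defaults of any particular tool.

```shell
# Illustrative zram setup: a device sized well above the common
# 50%-of-RAM default, plus vm.page-cluster=0 so swap-ins read one page
# at a time (zram has no seek penalty, so readahead only wastes work).
modprobe zram
zramctl /dev/zram0 --algorithm zstd --size 16G   # e.g. 200% of 8 GiB RAM
mkswap /dev/zram0
swapon -p 100 /dev/zram0
sysctl vm.page-cluster=0
```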

And you can use a zram-backed ram0 if you're still undecided :D

Online-upgrading a rolling distro with a browser running is enough to see it happen regularly.

A simpler alternative to OOM daemons could be enabling MGLRU's thrashing prevention: https://www.kernel.org/doc/html/next/admin-guide/mm/multigen...

I'm using it together with zram sized to 200% of RAM on a low-RAM phone with no disk swap (plus some tuning like the mentioned clustering knob), and it works pretty well if you don't mind some otherwise preventable kills, but I will happily switch to diskless zswap once it's ready.
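Enabling MGLRU's thrashing prevention boils down to two sysfs writes, as described in the linked kernel doc; the 1000 ms value here is just an example threshold.

```shell
# Sketch of enabling MGLRU's thrashing prevention (see the linked
# admin-guide page). min_ttl_ms keeps the working set of the last N
# milliseconds resident; when memory is too tight for that, the kernel
# OOM-kills instead of thrashing.
echo y    > /sys/kernel/mm/lru_gen/enabled
echo 1000 > /sys/kernel/mm/lru_gen/min_ttl_ms   # example: protect last 1s
```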


> showing that you care about how other people see you is a proxy signal for caring about other people

How so?


Genuine question - what else did you expect?

For it to follow the instructions I had for it. Call me naive and stupid for thinking the 1M context window on the brand new model would actually, y'know, work.

That's a bit anthropomorphic though.

When LLMs become able to reflectively examine their own premises and weight paths, they will exceed the self-awareness of ordinary humans.


Just dealt with this last night with Claude repeatedly risking a full system crash by failing to ensure that the previous training run of a model ended before starting the next one.

It's a pretty strange issue, makes me feel like the 1M context model was actually a downgrade, but it's probably something weird about the state of its memory document. I wasn't even very deep into the context.


Why would a further chance at context pollution be a good thing? I feel like it is easier for data to get lost in a larger context.

It doesn’t reason or explicitly follow instructions, it generates plausible text given a context.

There's a vast space between premature optimization and not caring about optimization until it bites you, and both extremes make you (or someone else) miserable.

> But your point about mesa expecting wayland-client is a very tight binding here.

You don't have to use Mesa's wayland-egl to make EGL work with Wayland; you can easily pass dmabufs by yourself - though this will theoretically be less portable, as dmabufs are Linux-specific (but in practice they're also implemented by various BSDs).

