Hacker News | cedws's comments

It's not about an expectation of work (well, there are some entitled people, sure).

It's about throwing away the effort the reporter put into filing the issue. Stale bots disincentivise good-quality issues, make them less discoverable, and create the burden of having to collate discussion across N previously raised issues about the same thing.

Bug reports and FRs are also a form of work. They might have a selfish motive, but they're still raised with the intention of enriching the software in some way.


It's like playing The Witness. Somebody should set LLMs loose on that.

I don't think you can put them into buckets like that. All addiction is driven by the pursuit of a reward. The magnitude of the reward can be estimated with brain scans and the like, but to my understanding it isn't universal across all humans.

Can we definitely say gambling addiction is less serious than alcohol addiction when there's individuals who find the former harder to quit than the latter?


Wasn't Zuckerberg caught red handed in emails signing off on this? When is he going to be facing consequences?

Corporate liability isolation has become absurd. People who make decisions that harm others should be held to account for those decisions, even if they structured their decision-making apparatus in a legal way that makes it look like they're just following the orders of the shareholders.

Zuckerberg has a brain, he decided to take this action, it is absurd he is not being hit with a personal penalty.


Consequences are for poor people.

Serve data to users at their expense instead of yours.

I've wanted something like this for a while. S3 Requester Pays kind of achieves the same thing but requires the requester to have a funded AWS account.


That's an interesting take. I'm curious what a pricing model for a setup like that could look like at an enterprise tier.

I could see something like how some B2C businesses sell PDFs, videos, and tutorials at a flat fee, but B2B enterprise usually ends up with metered pricing, so negotiating that could end up with many dynamic URLs, which sounds like a giant mapping table on top of the protocol.


I'm thinking along the lines of wikis, archival projects, and yeah pay-per-view video/photo websites. Torrents only work for static data and aren't very performant.

This is a fair way to allow AI scraping too: you can let bots scrape as much as they like, as long as they cover the cost of serving that content. The bots don't have to enter payment information into every website they want to scrape.


If I remember correctly UK players can no longer chat at all until they verify their ID.

This is the security shortcuts of the past 50 years coming back to bite us. Software has historically been a world where we all just trust each other. I think that’s coming to an end very soon. We need sandboxing for sure, but it’s much bigger than that. Entire security models need to be rethought.

This assumes that we can get a locked down, secure, stable bedrock system and sandbox that basically never changes except for tiny security updates that can be carefully inspected by many independent parties.

Which sounds great, but the way things work now tends to be the exact opposite of that, so there will be no trustable platform in which to run the untrusted code. If the sandbox, or the operating system the sandbox runs in, gets breaking changes and forces everyone to always be on a recent release (or worse, track the main branch), then that will still be a huge supply-chain risk in itself.


The secure boot "shim" is a project like this. Perhaps we need more core projects that can be simple and small enough to reach a "finished" state where they are unlikely to need future upgrades for any reason. Formal verification could help with this ... maybe.

https://wiki.debian.org/SecureBoot#Shim


> This assumes that we can get a locked down, secure, stable bedrock system and sandbox that basically never changes except for tiny security updates that can be carefully inspected by many independent parties.

For the most part you can. Just version-pin slightly stale versions of dependencies, after ensuring there are no known exploits for that version. Avoid the latest updates whenever possible. And keep aware of security updates and affected versions.

Don't just update every time the dependency project updates. Update specifically for security issues, new features, and specific performance benefits. And even then avoid the latest version when possible.
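The pinning policy described above can be sketched as a CI check that compares installed versions against a deliberate pin list and a feed of known-bad versions. The package names, pinned versions, and advisory entries below are hypothetical examples, not real data.

```python
# Sketch of the "pin slightly stale, update deliberately" policy.
# PINNED and KNOWN_BAD here are illustrative placeholders; in practice
# the pins would come from a lockfile and the bad list from an advisory feed.

PINNED = {
    "requests": "2.31.0",   # deliberately one minor behind latest
    "urllib3": "2.0.7",
}

# Versions with published exploits (hypothetical entries).
KNOWN_BAD = {("urllib3", "2.0.2")}

def check_pins(installed: dict[str, str]) -> list[str]:
    """Return human-readable problems: drift from the pin, or a pinned
    version that has a known exploit."""
    problems = []
    for name, pinned in PINNED.items():
        if (name, pinned) in KNOWN_BAD:
            problems.append(f"{name}=={pinned} has a known exploit; bump it")
        actual = installed.get(name)
        if actual is None:
            problems.append(f"{name} is pinned but not installed")
        elif actual != pinned:
            problems.append(f"{name}: installed {actual}, pinned {pinned}")
    return problems
```

The point is that updates happen only when this check (or a human reading advisories) says they must, rather than on every upstream release.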


Sure, and that is basically what sane people do now, but it only works until something needs a security patch that was not provided for the old version. Changing one dependency is likely to cascade, so now I am open to supply-chain attacks in many dependencies again (even if briefly).

To really run code without trust would need something more like a microkernel that is the only thing in my system I have to trust, and everything running on top of that is forced to behave and isolated from everything else. Ideally a kernel so small and popular and rarely modified that it can be well tested and trusted.


Virtual machines are that: tiny surfaces for accessing the host system (block disk device, ...). Which is why virtual machine escape vulnerabilities are quite rare.

I feel like in some cases we should be using virtual machines. Especially in domains where risk is non-trivial.

How do you change developer and user habits though? It's not as easy as people think.


I think Bootstrappable Builds from source without any binaries, plus distributed code audits would do a better job than locking down already existing binaries.

https://bootstrappable.org/ https://github.com/crev-dev/


> This assumes that we can get a locked down, secure, stable bedrock system and sandbox that basically never changes except for tiny security updates that can be carefully inspected by many independent parties.

Not really. You should limit the attack surface for third-party code.

A linter running in `dir1` should not access anything outside `dir1`.
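The core of that restriction is a path check before any file access is delegated to the real filesystem. A minimal sketch, with hypothetical helper names, of what a sandboxing wrapper around a linter's file access might enforce:

```python
import os

def is_within(root: str, path: str) -> bool:
    """True if `path` resolves to `root` or somewhere beneath it.
    realpath() collapses symlinks and '..', so tricks like
    'dir1/../etc/passwd' resolve to their real location first."""
    root = os.path.realpath(root)
    target = os.path.realpath(path)
    return target == root or target.startswith(root + os.sep)

def guarded_open(root: str, path: str, mode: str = "r"):
    """Refuse to open anything outside `root` before delegating to open()."""
    if not is_within(root, path):
        raise PermissionError(f"{path!r} is outside the sandbox {root!r}")
    return open(path, mode)
```

A real sandbox would enforce this at the OS level (namespaces, seccomp, or a VM) rather than in-process, since in-process checks can be bypassed by the very code they constrain; the sketch just shows the policy.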


>Which sounds great, but the way things work now tend to be the exact opposite of that, so there will be no trustable platform to run the untrusted code in.

This is the problem with software progressivism. Some things really should just be what they are, you fix bugs and security issues and you don't constantly add features. Instead everyone is trying to make everything have every feature. Constantly fiddling around in the guts of stuff and constantly adding new bugs and security problems.


The NIH syndrome becoming best practice (a commenter below already says they "vibe-coded replacements for many dependencies") would also save quite a few jobs, I suspect. Fun times.

I've been doing that too. The downside is it's a lot of work for big replacements.

I've been thinking the same thing. And it's somewhat parallel to what happened with meditation vs. drugs. In the old world, the dangerous insights required so many years of discipline that you could sort of trust that the person getting the insight would be OK. But then any idiot can get the insight by just eating some shrooms and oops, that's a problem. Mostly a self-harm problem in that case. But the dynamic is somewhat similar to what's happening now with LLMs and coding.

Software people could (mostly) trust each other's OSS contributions because we could trust the discipline it took in the first place. Not any more.


> In the old world the dangerous insights required so many years of discipline that you could sort of trust that the person getting the insight would be ok. But then any idiot can get the insight by just eating some shrooms and oops, that's a problem.

I would think humans have been using psychedelics since before we figured out meditation. Likely even before we were humans.


Ah yes the stoned ape hypothesis. I don't know if there is or will ever be evidence to support the hypothesis.

I also like the drunk monkey hypothesis.


What in the world are “the dangerous insights”?

“Society is a construct”, for starters?

That's babby's first insight. Most people figure this out on their own in kindergarten.

Supply-chain attacks long pre-date effective AI agentic coding, FWIW.

What we need is accountability and ties to real-world identity.

If you're compromised, you're burned forever in the ledger. It's the only way a trust model can work.

The threat of being forever tainted is enough to make people more cautious, and attackers will have no way to pull off attacks unless they steal identities of powerful nodes.

Like, it shouldn't be a thing that some large open-source project has some 4th-layer nested dependency made by some anonymous developer with 10 stars on GitHub.

If instead, the dependency chain had to be tied to real verified actors, you know there's something at stake for them to be malicious. It makes attacks much less likely. There's repercussions, reputation damage, etc.


> The threat of being forever tainted is enough to make people more cautious

No it's not. The blame game was very popular in the Eastern Bloc, and it resulted in a stagnant society where lots of things went wrong anyway. For instance, Chernobyl.


> What we need is accountability and ties to real-world identity.

Who's gonna enforce that?

> If you're compromised, you're burned forever in the ledger.

Guess we can't use XZ utils anymore cause Lasse Collin got pwned.

Also can't use Chalk, debug, ansi-styles, strip-ansi, supports-color, color-convert and others due to Josh Junon also ending up a victim.

Same with ua-parser-js and Faisal Salman.

Same with event-stream and Dominic Tarr.

Same with the 2018 ESLint hack.

Same with everyone affected by Shai-Hulud.

Hell, at that point some might go out of their way to get people they don't like burned.

At the same time, I think that stopping relying on package managers that move fast and break things, and instead making OS maintainers review every package and include them in distros, would make more sense. Of course, that might also be absolutely insane (that's how you get an ecosystem that's anywhere from 2 months to 2 years behind upstream) and take 10x more work, but with all of these compromises, I'd probably take that, and old packages with security patches, over pulling random shit with npm or pip or whatever.

Though having some sort of ledger of bad actors (instead of people who just fuck up) might also be nice, if a bit impossible to create, because in the current-day world that's potentially every person you don't know and can't validate is actually sending you patches (instead of someone impersonating them), or anyone whose motivations aren't clear to you, especially in the case of various "helpful" Jia Tans.


Accountability is on the people using a billion third-party dependencies; you need to take responsibility for every line of code you use in your project.

If you are really talking about dependencies, I’m not sure you’ve really thought this all the way through. Are you inspecting every line of the Python interpreter and its dependencies before running? Are you reading the compiler that built the Python interpreter?

It's still smart to limit the amount of code (and coders) you have to trust. A large project like Python should be making sure its dependencies are safe before each release. In our own projects, we'd probably be better off taking just the code we need from a library, verifying it (at least to the extent of looking for something as suspect as a random block of base64-encoded data), and copying it into our projects directly, rather than adding a ton of external dependencies plus every dependency they pull in and then just hoping that nobody anywhere in that chain gets compromised.
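Even that shallow check can be mechanised. A rough sketch of the kind of scan the parent comment describes: long runs of base64-looking text are rare in honest source code, so flag them for human review. The regex and length threshold are arbitrary illustrative choices, not a vetted detection rule.

```python
import re

# Heuristic: 80+ consecutive base64-alphabet characters on one line.
BASE64_RUN = re.compile(r"[A-Za-z0-9+/]{80,}={0,2}")

def suspicious_lines(source: str):
    """Yield (line_number, matched_text) for lines containing a long
    base64-looking run, for a human to inspect."""
    for lineno, line in enumerate(source.splitlines(), start=1):
        for match in BASE64_RUN.finditer(line):
            yield lineno, match.group()
```

This will miss payloads split across lines or hex-encoded ones, and will false-positive on legitimate embedded data (icons, keys in test fixtures), so it's a prompt for review, not a verdict.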

> real-world identity

This bit sounds like dystopian governance, antithetical to most open source philosophies.


Would you drive on bridges or ride in elevators "inspected" by anons? Why are our standards for digital infrastructure and software "engineering" so low?

I don't blame the anons but the people blindly pulling in anon dependencies. The anons don't owe us anything.


This option is available already in the form of closed-source proprietary software.

If someone wants a package manager where all projects mandate verifiable ID, that's fine, but I don't see that getting many contributors. And I also don't see it stopping people from using fraudulent IDs.


A business or government can (should) separately package, review, and audit code without involving upstream developers or maintainers at all.

Do you know who inspected a bridge before you drive over it?

There is no need for that bullshit. Guix can just set up an isolated container in seconds, not touching your $HOME at all, and import all the Python/NPM/whatever dependencies on the spot.

This looks like the same TeamPCP that compromised Trivy. Notice how the issue is full of bot replies. It was the same in Trivy’s case.

This threat actor seems to be very quickly capitalising on stolen credentials, wouldn’t be surprised if they’re leveraging LLMs to do the bulk of the work.


What is the rationale for the attacker spamming the relevant issue with bot replies? Does this benefit them? Maybe it makes discussion impossible, to confuse maintainers and delay the time to a fix?

Yes, trying to slow down response.

What's new isn't the shortcuts, it's the cascading: one compromised Trivy instance led to kics, led to litellm, led to dspy and crewai and mlflow and hundreds of MCP servers downstream. The attacker didn't need to find five separate vulnerabilities; they found one and rode the dependency graph. That's a fundamentally different threat model than what most security tooling is built around.
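That cascade is just reachability on the reverse dependency graph. A minimal sketch: given "X depends on Y" edges, walk backwards from one compromised package to find everything downstream. The edges below are illustrative placeholders, not the real dependency relationships between these projects.

```python
from collections import deque

def blast_radius(deps: dict[str, set[str]], compromised: str) -> set[str]:
    """deps maps a package to the packages it depends on. Returns every
    package that transitively depends on `compromised`."""
    # Invert the graph: who depends on me?
    dependents: dict[str, set[str]] = {}
    for pkg, its_deps in deps.items():
        for d in its_deps:
            dependents.setdefault(d, set()).add(pkg)
    # Breadth-first walk over the reversed edges.
    affected, queue = set(), deque([compromised])
    while queue:
        cur = queue.popleft()
        for dep in dependents.get(cur, ()):
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    return affected

# Hypothetical edges for illustration only.
deps = {
    "kics": {"trivy"},
    "litellm": {"kics"},
    "dspy": {"litellm"},
    "crewai": {"litellm"},
    "some-mcp-server": {"crewai"},
}
```

One compromised root reaches the whole chain, which is why the attacker needed only one foothold.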

First prove the solution wasn’t in the training data. Otherwise it’s all just vibes and ‘trust me bro.’

It amounts to an argument against pinning in a (IMO) weird world view where the package maintainer is responsible for the security of users' systems. That feels wrong. The user should be responsible for the security of their system, and for setting their own update policy. I don't want a volunteer making decisions about when I get updates on my machine, and I'm pretty security minded. Sure, make the update available, but I'll decide when to actually install it.

In a broader sense, I think computing needs to move away from these centralised models where a "random person in Nebraska"[0] is silently doing a bunch of work for everyone, even with good intentions. Decisions should be deferred to the user as much as possible.

[0]: https://xkcd.com/2347/

