Fascinating. Just yesterday the author added a `SECURITY.md` file to the `xz-java` project.
> If you discover a security vulnerability in this project please report it
> privately. *Do not disclose it as a public issue.* This gives us time to
> work with you to fix the issue before public exposure, reducing the chance
> that the exploit will be used before a patch is released.
Reading that in a different light, it says give me time to adjust my exploits and capitalize on any targets. Makes me wonder what other vulns might exist in the author's other projects.
In this particular case, there is a strong reason to expect exploitation in the wild to already be occurring (because it's an intentional backdoor) and this would change the risk calculus around disclosure timelines.
But in the general case, it's normal to allow 90 days for the coordinated patching of even very severe vulnerabilities -- you are giving time not just to the project maintainers to produce a fix, but to the users of the software to finish updating their systems to the new fixed release, before enough detail to easily weaponize the vulnerability is shared. Google Project Zero, a team with many critical-impact findings, is an example of one using a 90-day timeline.
As someone in security who doesn't work at a major place that gets invited to the nice advance notifications, I hate this practice.
My customers and business are not any less important or valuable than anyone else's, and I should not be left potentially exploited, and my customers harmed, for 90 more days while the big guys get to patch their systems (thinking of e.g. Log4j, where Amazon, Meta, Google, and others were told privately how to fix their systems before others were, even though the fix was simple).
Likewise, as a customer I should get to know as soon as someone's software is found vulnerable, so I can then make the choice whether to continue to subject myself to the risk of continuing to use it until it gets patched.
> My ... business are not any less ... valuable than anyone else's,
Plainly untrue. The reason they keep distribution minimal is to maximise the chance of keeping the vuln secret. Your business is plainly less valuable than Google, than Walmart, than GoDaddy, than BoA. Maybe you're some big cheese with a big reputation to keep, but seeing as you're feeling excluded, I guess these orgs have no more reason to trust you than they have to trust me, or the hundreds of thousands of others who want to know. If they let you in, they'd let all the others in, and the odds are greatly increased that your customers are now at risk from something one of these others has worked out and either blabbed about or decided to exploit themselves.
Similarly plainly, by disclosing to 100 major companies, they protect a vast breadth of consumers and customer-businesses of those companies at a leak risk of roughly 100 informed parties per 10,000,000 protected (or even less, given those companies have more valuable reputations to keep). Changing that to 10,000 informed parties per 12,000,000 protected is, well, a risk they don't feel is worth taking.
> Your business is plainly less valuable than Google, than Walmart, than GoDaddy, than BoA.
The company I work for has a market cap roughly 5x that of GoDaddy, and we're responsible for network-connected security systems that potentially control whether a person can physically access your home, school, or business. We were never notified of this until this HN thread.
If your BofA account gets hacked you lose money. If your GoDaddy account gets hacked you lose your domain. If Walmart gets hacked they lose... what, money, and have logistics issues for a while?
Thankfully my company's products have additional safeguards and this isn't a breach for us. But what if it was? Our customers can literally lose their lives if someone cracks the security and finds a way to remotely open all the locks in their home or business.
Don't tell me that some search engine profits or someone's emails history is "more valuable" than 2000 schoolchildren's lives.
How about you give copies of the keys to your apartment, along with a card containing your address, to 50 random people on the street and see if you still feel that having your Gmail account hacked is the bigger loss.
I think from an exposure point of view, I'm less likely to worry about the software side of my physical security being exploited than the actual hardware side.
None of the points you make are relevant, since I have yet to see any software-based entry product whose software security can be considered more than lackluster at best. Maybe your company is better; since you didn't mention a name, I can't say otherwise.
What I'm saying is that your customers are more likely to have their doors physically broken than remotely opened by software, and you're here going on about life and death because of a vuln in xz?
If your company's market cap is as high as you say, and they are as security-aware as you say, why aren't they employing security researchers and actively at the forefront of finding and reporting vulns? That would get them an invite to the party.
Sorry, but that's not a serious risk analysis. The average person would be hurt a lot more by a godaddy breach by a state actor than by a breach of your service by a state actor.
But I don't want anyone else to get notified immediately, because the odds that somebody will start exploiting people before a patch is available are pretty high. Since I can't have both, I will choose the 90 days for the project to get patches done and all the packagers to include them and make them available, so that by the time it's public knowledge I'm already patched.
I think this is a Tragedy of the Commons type of problem.
Caveat: This assumes the vuln is found by a white hat. If it's being exploited already or is known to others, then I fully agree the disclosure time should be eliminated, and it's BS for the big companies to get more time than us.
OpenSSL's "notification of an upcoming critical release" is public, not private.
You do get to know that the vulnerability exists quickly, and you could choose to stop using OpenSSL altogether (among other mitigations) once that email goes out.
Yeah, I worked in FAANG when we got advance notice of a number of CVEs. Personally I think it's shady; I don't care how big Amazon or Google is, they shouldn't get special privileges because they are a large corporation.
I don't think the rationale is that they are a large corporation or have lots of money. It's that they have many, many, many more users that would be affected than most companies have.
I imagine they also have significant resources to contribute to dealing with breaches - e.g., analysing past commits by the bad actor, designing mitigations, etc.
If OP is managing something that is critical to life - think fire-suppression controllers, or computers connected to medical equipment - I think it becomes very difficult to compare that against financial assets.
At a certain scale, "economic" systems become critical to life. Someone who has sufficiently compromised a systemically-important bank can do things that would result in riots breaking out on the street all over a country.
You could use the EPA's dollars-to-lives conversion ratio (its value of a statistical life).
Though anything actually potentially lethal shouldn't really have a standard Internet connection. E.g. nuclear power plants, train and plane controls, heavy industrial equipment, nuclear weapons...
In that case OP should not design systems where an sshd compromise can have a life-threatening impact. Just because it's easier for everything to be controlled from the cloud doesn't mean that others need to feel sympathy when that turns out to be as bad an idea as everyone else has said.
a. Use commercial OS vendors who will push out fixes.
b. Set up a Continuous Integration process where everything is open source and is built from the ground up, with some reliance on open source platforms such as distros.
One needs different types of competence and IT operational readiness for each approach.
Whether it's reasonable is debatable, but that type of time frame is pretty normal for things that aren't being actively exploited.
This situation is perhaps a little different, as it's not an accidental bug waiting to be discovered but an intentionally placed exploit. We know that a malicious person already knows about it.
Detecting a security issue is one thing. Detecting a malicious payload is something completely different. The latter has intent to exploit and must be addressed immediately. The former has at least some chance of no one knowing about it.
I think you have to take the credibility of the maintainer into account.
If it's a large company, made of people with names and faces, with a lot to lose by hacking its users, they're unlikely to abuse private disclosure. If it's some tiny library, the maintainers might be in on it.
Also, if there's evidence of exploitation in the wild, the embargo is a gift to the attacker. The existence of a vulnerability in that case should be announced, even if the specifics have to be kept under embargo.
In this case the maintainer is the one who deliberately introduced the backdoor. As Andres Freund puts it deadpan, "Given the apparent upstream involvement I have not reported an upstream bug."
> imho it depends on the vuln. I've given a vendor over a year, because it was a very low risk vuln.
But why? A year is a ridiculous time for fixing a vulnerability, even a minor one. If a vendor is taking that long, it's because they don't prioritize security at all and are just dragging their feet.
I've always laughed my ass off at the idea of a disclosure window. It takes less than a day to find RCE that grants root privileges on devices that I've bothered to look at. Why on earth would I bother spending months of my time trying to convince someone to fix something?
If this question had a reliable (and public) answer then the world would be a very different place!
That said, this is an important question. We, particularly those of us who work on critical infrastructure or software, should be asking ourselves this regularly to help prevent this type of thing.
Note that it's also easy (and similarly catastrophic) to swing too far the other way and approach all unknowns with automatic paranoia. We live in a world where we have to trust strangers every day, and if we lose that option completely then our civilization grinds to a halt.
But-- vigilance is warranted. I applaud these engineers who followed their instincts and dug into this. They all did us a huge service!
Yeah thanks for saying this; I agree. And as cliche as it is to look for a technical solution to a social problem, I also think better tools could help a lot here.
The current situation is ridiculous - if I pull in a compression library from npm, cargo or Python, why can that package interact with my network, make syscalls (as me) and read and write files on my computer? Leftpad shouldn’t be able to install crypto ransomware on my computer.
To solve that, package managers should include capability based security. I want to say “use this package from cargo, but refuse to compile or link into my binary any function which makes any syscall except for read and write. No open - if I want to compress or decompress a file, I’ll open the file myself and pass it in.” No messing with my filesystem. No network access. No raw asm, no trusted build scripts and no exec. What I allow is all you get.
The capability should be transitive. All dependencies of the package should be brought in under the same restriction.
In dynamic languages like (server side) JavaScript, I think this would have to be handled at runtime. We could add a capability parameter to all functions which issue syscalls (or do anything else that's security sensitive). When the program starts, it gets an "everything" capability. That capability can be cloned and reduced to just the capabilities needed (think pledge). If I want to talk to redis using a 3rd party library, I pass the redis package a capability which only allows it to open network connections - and only to this specific host on this specific port.
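Roughly, a sketch of the shape I have in mind, written in TypeScript - all the names here (`NetCapability`, `createRedisClient`) are made up for illustration, not an existing npm API:

```typescript
// Rough sketch of the runtime capability idea. All names are hypothetical.
type HostPort = { host: string; port: number };

class NetCapability {
  constructor(private allowed: HostPort[]) {}

  private permits(t: HostPort): boolean {
    return this.allowed.some(a => a.host === t.host && a.port === t.port);
  }

  // Derive a strictly narrower capability ("attenuation"); it can never be widened.
  restrictTo(subset: HostPort[]): NetCapability {
    if (!subset.every(t => this.permits(t))) {
      throw new Error("cannot grant a permission you do not hold");
    }
    return new NetCapability(subset);
  }

  // Stand-in for "open a TCP connection"; a real runtime would return a socket.
  connect(target: HostPort): void {
    if (!this.permits(target)) {
      throw new Error(`connect to ${target.host}:${target.port} not permitted`);
    }
    console.log(`connected to ${target.host}:${target.port}`);
  }
}

// Hypothetical third-party redis client: the only way it can reach the network
// is through the capability its caller hands it.
function createRedisClient(net: NetCapability, target: HostPort) {
  net.connect(target); // throws unless the capability permits this exact host/port
  return { ping: () => "PONG" };
}

// Application code: start with the broad capability, narrow it, pass it down.
const root = new NetCapability([{ host: "10.0.0.5", port: 6379 }]);
const redisOnly = root.restrictTo([{ host: "10.0.0.5", port: 6379 }]);
const redis = createRedisClient(redisOnly, { host: "10.0.0.5", port: 6379 });
console.log(redis.ping());
```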
It wouldn’t stop all security problems. It might not even stop this one. But it would dramatically reduce the attack surface of badly behaving libraries.
The problem we have right now is that any linked code can do anything, both at build time and at runtime. A good capability system should be able to stop xz from issuing network requests even if other parts of the process do interact with the network. It certainly shouldn't have permission to replace crc32_resolve() and crc64_resolve() via ifunc.
Another way of thinking about the problem is that right now every line of code within a process runs with the same permissions. If we could restrict what 3rd party libraries can do - via checks either at build time or runtime - then supply chain attacks like this would be much harder to pull off.
I'm not convinced this is such a cure-all, as any library must necessarily have the ability to "taint" its output. Like, consider this library. It's a compression library. You would presumably trust it to decompress things, right? Like programs? And then you run those programs with full permissions? Oops...
It’s not a cure-all. I mean, we’re talking about infosec - so nothing is. But that said, barely any programs need the ability to execute arbitrary binaries. I can’t remember the last time I used eval() in JavaScript.
I agree that it wouldn’t stop this library from injecting backdoors into decompressed executables. But I still think it would be a big help anyway. It would stop this attack from working.
At the big picture, we need to acknowledge that we can’t implicitly trust opensource libraries on the internet. They are written by strangers, and if you wouldn’t invite them into your home you shouldn’t give them permission to execute arbitrary code with user level permissions on your computer.
I don’t think there are any one size fits all answers here. And I can’t see a way to make your “tainted output” idea work. But even so, cutting down the trusted surface area from “leftpad can cryptolocker your computer” to “Leftpad could return bad output” sounds like it would move us in the right direction.
Of course we need to trust people to some degree. There's an old Jewish saying - put your trust in god, but your money in the bank. I think its like that. I'm all for trusting people - but I still like how my web browser sandboxes every website I visit. That is a good idea.
We (obviously) put too much trust in little libraries like xz. I don't see a world in which people start using fewer dependencies in their projects. So given that, I think anything which makes 3rd party dependencies safer than they are now is a good thing. Hence the proposal.
The downside is it adds more complexity. Is that complexity worth it? Hard to say. That's still worth talking about.
I guess the big open-source community should put a little more trust in statistics, or integrate statistical evaluation into their decision-making about which specific products to use in their supply chains.
This approach could work for dynamic libraries, but a lot of modern ecosystems (Go, Rust, Swift) prefer to distribute packages as source code that gets compiled with the including executable or library.
The goal is to restrict what included libraries can do. As you say, in languages like Rust, Go or Swift, the mechanism to do this would also need to work with statically linked code. And that's quite tricky, because there are no isolation boundaries between functions in executables.
It should still be possible to build something like this. It would just be inconvenient. In Rust, Swift and Go you'd probably want to implement something like this at compile time.
In Rust, I'd start by banning unsafe in dependencies. (Or whitelisting which projects are allowed to use unsafe code.) Then add special annotations on all the methods in the standard library which need special permissions to run - for example, File::open, fork, exec, networking, and so on. In cargo.toml, add a way to specify which permissions your child libraries get: "Import serde, but give it no OS permissions". When you compile your program, the compiler can look at the call tree of each function to see what actually gets called, and make sure the permissions match up. If you call a function in serde which in turn calls File::open (directly or indirectly), and you didn't explicitly allow that, the program should fail to compile.
It should be fine for serde to contain some utility function that calls the banned File::open, so long as the utility function isn't called.
Permissions should be in a tree. As you get further out in the dependency tree, libraries get fewer permissions. If I pass permissions {X,Y} to serde, serde can pass permission {X} to one of its dependencies in turn. But serde can't pass permission {Q} to its dependency - since it doesn't have that capability itself.
Any libraries which use unsafe are sort of trusted to do everything. You might need to insist that any package which calls unsafe code is actively whitelisted by the cargo.toml file in the project root.
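Nothing like this exists in cargo today, but the narrowing rule itself is easy to state. Here's a toy check of it, sketched in TypeScript for brevity; the permission names and the manifest shape are invented:

```typescript
// Toy check of the "permissions only narrow as you go down the dependency tree" rule.
// Permission names and manifest shape are hypothetical.
type Permission = "fs-read" | "fs-write" | "net" | "exec";

// What each package declares for its direct dependencies.
const grants: Record<string, Record<string, Permission[]>> = {
  app:   { serde: ["fs-read", "net"] },
  serde: { "serde-helper": ["fs-read"] }, // ok: a subset of what serde itself received
  // serde: { "serde-helper": ["exec"] } would be rejected by the check below
};

function checkGrants(pkg: string, received: Permission[]): void {
  for (const [dep, wanted] of Object.entries(grants[pkg] ?? {})) {
    const illegal = wanted.filter(p => !received.includes(p));
    if (illegal.length > 0) {
      throw new Error(`${pkg} grants ${dep} permissions it does not hold: ${illegal.join(", ")}`);
    }
    checkGrants(dep, wanted); // recurse: a dependency's own grants are checked against what it got
  }
}

// The root of the tree gets everything; everything below it can only lose permissions.
checkGrants("app", ["fs-read", "fs-write", "net", "exec"]);
console.log("all grants narrow properly");
```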
>It should still be possible to build something like this. It would just be inconvenient.
Inconvenient is quite the understatement. Designing and implementing something like this for each and every language compiler/runtime requires hugely more effort than doing it on the OS level. The likelihood of mistakes is also far greater.
Perhaps it's worth exploring whether it can be done on the LLVM level so that at least some languages can share an implementation.
A process can do little to defend itself from a library it's using which has full access to the same memory. There is no security boundary there. This kind of backdoor doesn't hinge on IFUNC's existence.
Honestly, I don't have a lot of hope that we can fix this problem for C on Linux. There's just so much historical cruft present, spread between autotools, configure, make, glibc, gcc and C itself, that would need to be modified to support capabilities.
The rule we need is "If I pull in library X with some capability set, then X can't do anything not explicitly allowed by the passed set of capabilities". The problem in C is that there is currently no straightforward way to firewall off different parts of a linux process from each other. And dynamic linking on linux is done by gluing together compiled artifacts - with no way to check or understand what assembly instructions any of those parts contain.
I see two ways to solve this generally:
- Statically - i.e. at compile time, the compiler annotates every method with the set of permissions it (recursively) requires. The program fails to compile if a method is called which requires permissions that the caller does not pass it. In Rust, for example, I could imagine cargo enforcing this for Rust programs. But I think it would require some changes to the C language itself if we want to add capabilities there. Maybe some compiler extensions would be enough - but probably not, given a C program could obfuscate which functions call which other functions.
- Dynamically. In this case, every Linux system call is replaced with a new version which takes a capability object as a parameter. When the program starts, it is given a capability by the OS, and it can then use that to create child capabilities passed to different libraries. I could imagine this working in Python or JavaScript. But for this to work in C, we need to stop libraries from just scanning the process's memory and stealing capabilities from elsewhere in the program.
Or take the Chrome / original Go approach: load that code in a different process, use some kind of RPC. With all the context switch penalty... sigh, I think it is the only way, as the MMU permissions work at a page level.
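For a Node service, a rough sketch of that approach might look like the following; the file names and message shape are made up, and the point is just that the untrusted decompressor lives in its own OS process and only ever sees the bytes it's sent:

```typescript
// parent.ts -- sketch of the out-of-process approach. The decompressor shares
// no memory with us; we talk to it over the structured IPC channel only.
import { fork } from "node:child_process";
import { gzipSync } from "node:zlib";

const worker = fork("./decompress-worker.js"); // the only code that loads the compression library

worker.on("message", (raw) => {
  const msg = raw as { ok: boolean; data?: string; error?: string };
  if (msg.ok) console.log("decompressed:", msg.data);
  else console.error("worker failed:", msg.error);
  worker.kill(); // one-shot here; a real design would pool workers and drop their privileges
});

// Send a compressed payload as base64 over the IPC channel.
const payload = gzipSync(Buffer.from("hello from the parent")).toString("base64");
worker.send({ compressed: payload });

// decompress-worker.ts (separate file), roughly:
//
//   import { gunzipSync } from "node:zlib";
//   process.on("message", (msg: { compressed: string }) => {
//     try {
//       const data = gunzipSync(Buffer.from(msg.compressed, "base64")).toString();
//       process.send!({ ok: true, data });
//     } catch (e) {
//       process.send!({ ok: false, error: String(e) });
//     }
//   });
```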
Firefox also has its solution of compiling dependencies to wasm, then compiling the wasm back into C code and linking that. It’s super weird, but the effect is that each dependency ends up isolated in bounds checked memory. No context switch penalty, but instead the code runs significantly slower.
> We, particularly those of us who work on critical infrastructure or software
We should also be asking ourselves if we are working on critical infrastructure. Lasse Collin probably did not consider liblzma being loaded by sshd when vetting the new maintainer. Did the xz project ever agree to this responsibility?
We should also be asking ourselves whether each dependency of critical infrastructure is worth the risk. sshd linking libsystemd just to write a few bytes into an open fd is absurd. libsystemd pulling in liblzma because, hey, it also does compressed logging is absurd. Yet this kind of absurd dependency bloat is everywhere.
We live in a time of populous, wealthy dictatorships that have computer-science expertise and are openly hostile to the US and Canada.
North America is only about 5% of the world's population. [1] (We can assume that malicious actors are in North America, too, but this helps to adjust our perspective.)
The percentage of maliciousness on the Internet is much higher.
Huh? The empirical evidence we have - thanks to the Snowden leaks - paints a different picture. The NSA is the biggest malicious actor, with nearly unlimited resources at hand. They even insert hardware backdoors and intercept shipments to do it.
Honestly it seems like a state-based actor hoping to get whatever high value target compromised before it's made public. Reporting privately buys them more time, and allows them to let handlers know when the jig is up.