This is a national security issue — malicious actors based in other nation-states are raiding American companies. It seems that US defense forces are not up to the task of repelling these invaders — yet we're expecting individual companies to go up against them??
It will be a long, long time before the marketplace evolves sufficient technological measures to guard against state-sanctioned/possibly-state-sponsored malicious actors operating with impunity in a lawless environment.
> It will be a long, long time before the marketplace evolves sufficient technological measures to guard against state-sanctioned/possibly-state-sponsored malicious actors operating with impunity in a lawless environment.
Unfortunately, the marketplace -- at least certain segments of it -- is far beyond .mil/.gov in terms of capacity and sophistication. E.g., AWS's formal tools for code-level security are what DARPA's been yelling about for decades, but gov't contractors and the branches/agencies are unable/unwilling to catch up.
I'm not sure how to fix gov or mil, but a good starting point within mil is to stop making career officers with theology and polisci degrees but zero CS training the first-line managers of cyber commands.
This upsets me a bit because I graduated from art school but got into DevOps because I'd been a Linux nerd since my early teens. There is room for us. It doesn't make us terrible fits for the job. On the contrary, I think people with unconventional backgrounds can offer unconventional insights into software architecture.
What you got a degree in shouldn't be the make or break of your career.
> DevOps because I was a Linux nerd since I was in my early teens
You are an exception.
In the military and certain parts of the corporate world ("enterprise" companies mostly), there is a widespread and systemic problem with horrendously unqualified people managing software/IT groups. See Susan Mauldin for a recent example.
We can allow space for self-taught people without opening the floodgates. No one should be in charge of IT security without first developing deep technical expertise at some point in their career.
I've never objected to being managed by anybody with an arts degree and never will. English, communications, philosophy, have at it. Some of my best technical managers majored in English.
Theology says something specific about how you process reality.
Sorry this wasn't clear in my original comment. My point was that judging someone's ability to do CS negatively by their interest in theology, as the person I was replying to did, is refuted by a well-known example. Makes sense now?
Not necessarily - theology is just philosophy. Biblical literalism is an almost uniquely American phenomenon, there are plenty of Christians around the world and in the US who don't believe the earth was literally created 6000 years ago.
Applied Theology is an American evangelist field of study that focuses on shaping your life and the world around you to operate according to the will and word of the Christian Evangelist god.
"Everything from preaching to media technology, from helping with funerals to discipling [sic] unbelievers."
This is one of those very frustrating conversations to have online, not that different from people saying "but it's the People's Democratic Republic of whatever."
The Baptist thing called "applied theology" is a part of the Dominionist movement. Fundamentally it's a theocratic endeavor.
I agree that theology has been used in a thousand ways across two thousand years and I'm sad that if you have a degree in "Applied Theology" from a small religious college in the Midwest it's definitely not just "I was thinking of becoming a minister."
But dominionism is a real thing. People major in it, they drop out, they get other jobs, and then you have this record of their beliefs right there on their resumes. It'd be easier if they had no degree at all.
You could well be right that this is a dominionist thing, though I don't see anything suggesting such - but then I can imagine it's not openly promoted, if so.
> Everything from preaching to media technology, from helping with funerals to discipling [sic] unbelievers
I don't see an issue with this, if discipling unbelievers means something similar to promoting the church, or missionary-esque behaviour (of the non-colonial form, obviously). If it means how to deal with atheists in a theocracy then I'm not a fan.
I totally agree with this, and for an entry-level job, with no experience, I would be totally cool with ignoring the degree if the person was right. For a manager or higher-level engineer, I believe some amount of prior experience is necessary; the degree is not a requirement, but it may influence how much experience I am willing to accept as requisite for the position.
FWIW, I read it as a joke in part about how common it is in their experience to run into people without CS degrees who are in this field.
(And in part just an opportunity to play with language. I grew up in the Bible Belt. Gentle humor about religion/spirituality is something I am no stranger to.)
In Vernor Vinge’s sci-fi novel “A Fire Upon the Deep”, applied theology is the study and appeasement of superhuman sentient AIs. :) It's a field where the study subjects either completely ignore the practitioners or squash them like bugs. Also, by the sound of it, it involves a lot of software archaeology. Who knows, if they studied, and survived, that kind of applied theology they might be good for technical leads.
I was trying to use rust to write a hyperdimensional version of Conway's game of life and some suits showed up with papers for me to sign and said if I hadn't used rust we'd have all been dead.
> career officers with theology and polisci degrees but zero CS training the first-line managers of cyber commands.
Is this true?
This can't be true. Surely they must have some CS experience?
I took my last bus off a USMC base so long ago I've raised a kid who's in med school since. Can someone with more recent experience chime in on whether or not this is hyperbole?
I was a cyber operator in the USAF for years, and finished up my active duty in late 2019. This is completely accurate. Not only are the first-line officers generally uneducated in CS/IT/security, the people training them are equally uneducated. This is apparently by design.
Once upon a time, I heard a story about Air Force officers working on some technical problem that required a relatively advanced education. Until a new CO moved in and said that that kind of thing wasn't going to wash; AF officers don't work, they manage.
It seems more that the most competent government agencies and private organizations are in the same general ballpark, but anyway that doesn't matter, because the ones getting hacked are the least competent of either group.
On whether it's being unable or unwilling: one could argue that various gov agencies actively invest in keeping software insecure, e.g. by buying up zero-days.
> It seems that US defense forces are not up to the task of repelling these invaders — yet we're expecting individual companies to go up against them??
They aren't? Has NORAD control been hacked? Any battleships or predator drones?
I admit it's a bad look when, for instance, a VA database is compromised and private information for millions of government employees is exposed, but I'd also be SHOCKED if the NSA were dedicating resources to protecting that data.
Outside of Snowden, what leaks of stuff "the US defense forces" are actually attempting to protect have been captured?
>yet we're expecting individual companies to go up against them?
The companies in question appear to not even be doing basic things like taking backups and making them immutable. I don't think anybody is expecting them to have perfect security, but it doesn't take a lot of effort to back up to a tape and stick it in Iron Mountain for 2 years; it just takes money.
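The "take backups and make them immutable" step the comment describes can be sketched in a few lines. This is a toy illustration, not a product: true write-once semantics have to be enforced by the storage layer (offline tape, object lock, etc.), and all names here are made up.

```python
import hashlib
import os
import tarfile
import time

def _sha256(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def snapshot(src_dir: str, vault_dir: str) -> str:
    """Archive src_dir into vault_dir, record a SHA-256 manifest,
    and mark the archive read-only. The chmod only guards against
    casual overwrite; real immutability belongs to the storage layer."""
    name = "backup-%d.tar.gz" % (time.time() * 1000)
    path = os.path.join(vault_dir, name)
    with tarfile.open(path, "w:gz") as tar:
        tar.add(src_dir, arcname=os.path.basename(src_dir))
    with open(path + ".sha256", "w") as f:
        f.write(_sha256(path))
    os.chmod(path, 0o444)  # read-only for everyone
    return path

def verify(path: str) -> bool:
    """Re-hash the archive and compare against the recorded digest."""
    with open(path + ".sha256") as f:
        return f.read().strip() == _sha256(path)
```

Run `snapshot` on a schedule, ship the archives somewhere ransomware can't reach, and run `verify` periodically so you find rot before you need the restore.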
I can't find it, but I'm pretty sure Iran has also lifted a drone of ours using a security hack. So if Iran can do it, I'm sure that means others have the same capacity.
Maybe security holes are just part and parcel of the whole enterprise. So you have to accept them and center your preparation around your response to such losses. How do you get back up and running? How do you operate without the asset that was compromised? And so on.
I agree, this is a classic do-as-I-say-not-as-I-do situation. Punish businesses for paying out ransomware ransoms, meanwhile governments (mainly local, but sometimes larger) routinely pay out when their systems are compromised.
There are so many levels on which our own govt is complicit in foreign crimes. There is no way our govt ever "fights" these attacks until we first eliminate the deeply corrupt forces that currently control it. Also very unlikely.
OTOH, this isn't just about bad actors raiding. This is also about terrible security practices that are easily avoidable with an ounce of expertise and giving a shit.
In addition to your idea (let's make believe for a moment...) how about the US govt itself sponsors these attacks, and then instead of demanding ransom, they just levy huge fines against the companies who have carelessly let this happen? Extending your analogy, this would be no different than fines or lawsuits for carelessness and failures in physical infrastructure.
And really it only takes a bad actor inside a company to circumvent most of the security on-site anyway. Until companies treat every accessor to the network as malicious, this will continue to go round in circles. Most businesses just don't have the budget to deal with security this way.
With proper backups outside the reach of most personnel, and requiring multiple signatories to delete them, a bad actor is not enough to bring the business down.
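The "multiple signatories" control is just a k-of-n check at the point where deletion is authorized. A minimal sketch, with illustrative names and threshold:

```python
def authorize_deletion(approvals: set, signatories: set, required: int = 2) -> bool:
    """Two-person rule: backup deletion proceeds only when at least
    `required` distinct registered signatories have approved, so a
    single compromised insider can't clear the bar alone."""
    return len(approvals & signatories) >= required
```

In practice this gate should live in the storage provider's policy layer (vault-lock style retention rules), not in code your own admins can edit, or the insider simply routes around it.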
There needs to be something. I'm curious what we will come up with. Some kind of sentinel that sits on the network and does behavioural and threat analysis of all network traffic and prevents damage before it can happen perhaps?
When I hear something like "network police", I start drawing parallels to the real world. In the physical world, you generally don't have anonymous people running around (or at least in most cases you have a way of linking any actions/crimes committed to a real, permanent identity), disguises aren't allowed, and trespassing is not permitted.
So the Internet could be like this if it was more regulated. Anonymous traffic could be prohibited...no more TOR nodes, no hands-off proxying of traffic, no "it's an open access point, I totally don't know who was creating that torrent traffic".
Would these sorts of laws be accepted, or would they simply result in more attempts to anonymize traffic?
I imagine that this is sort of what things are like in more authoritarian places like China. Is it effective there?
Such sentinel devices have been available for years. The trouble is they can't reliably identify attempts to exploit zero-days because the patterns are unknown.
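The reason signature-based sentinels miss zero-days is that they match known patterns; behavioural detection instead flags deviation from a baseline, with no knowledge of the exploit itself. A toy version over per-interval traffic byte counts (the threshold is arbitrary):

```python
from statistics import mean, stdev

def flag_anomalies(byte_counts, threshold=3.0):
    """Return indices of traffic samples more than `threshold`
    standard deviations from the mean. No signature needed -- only
    a baseline -- which is why this style of detection can catch
    novel attacks, and also why it produces false positives."""
    mu, sigma = mean(byte_counts), stdev(byte_counts)
    if sigma == 0:
        return []  # perfectly flat baseline: nothing deviates
    return [i for i, b in enumerate(byte_counts)
            if abs(b - mu) / sigma > threshold]
```

Real products layer far more context (flow direction, protocol, time of day) on top of this idea, but the trade-off is the same: sensitivity to the unknown versus alert fatigue.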
Sure we do. Anti-money laundering regulation is basically just the government outsourcing the front-line policing of financial transactions to banks and businesses.
Anti-money laundering regulation requires you to obtain and log information about the customer, not to recognize money laundering and definitely not to solve it (except in very special cases applicable to mega-corporations).
I’ve been the AML Officer for several large financial institutions and I can assure you they absolutely do need to recognize money laundering. They’re required to have systems, policies, and procedures in place to identify, prevent, and detect money laundering.
Keyword "large". Yes, we require large companies to do much more than small ones. But we require even the small ones to deal with network breaches perfectly.
"It will be a long, long time before the marketplace evolves sufficient technological measures to guard against state-sanctioned/possibly-state-sponsored malicious actors operating with impunity in a lawless environment."
Only because we, as a society, have decided that we don't care about information security, to the extent that we protect incompetent, ignorant, or uncaring individuals and organizations from any liability for their actions. How much better off would we be if we had fined Experian $1M for every user account they lost? Or for any of the other breaches in the preceding decades? How much more careful would your average bootcamp grad be writing the code that forms attack surfaces if they had to pay liability insurance?
By and large, we have had a good idea how to make technological measures much more resistant to attack since the '80s. It's always been considered too difficult and too expensive, something we have put exactly no resources towards fixing. Ten years of a quarter of the collective budget we spend on using ML to violate privacy would probably take everyone except state sanctioned actors out of the picture.
Yeah I think this is the way it should be interpreted. If you get yourself in the position where you need to pay the ransom, your security practices have failed. This makes that possibility more expensive. This is not a ridiculous idea.
While it's always good to stress the importance of testing your backups, the fear that HN seems to have surrounding backups seems really out of touch. You would think that everyone's backups are teetering on the edge of failure constantly. If your backup process is so brittle and complex that you assume it will be broken when you need it, you should probably do something about that. It's not that hard to have the things that store your data spit it out on a schedule to be transferred somewhere write-only and durable.
Oh sure, but if you have the data you can hire people to get it done. The important part is having the data; if the data is there (meaning not encrypted), money and time can get it working. I'd much prefer to have good backup media vs. having to find someone who knows how to read faded data from a way-overused and well-out-of-date tape, but there is technology to get data back from such bad backups.
The important part is having the backup in some form. Having a well-tested restore ability is a great idea, but not nearly as important as having the backups in the first place. Most backup programs are designed for restore; even if you screwed up, odds are you can get the data back later.
Having a downed database server for, e.g., 24h (while you are out trying to hire people to fix the failing restore for you) will probably mean you lose many of your B2B service users. Your clients might also try out that shiny new SLA you promised them.
Backups also ensure business continuity - which might be more important than past data for some workflows.
Of course, time is a factor here. Even in the best case you can't be back up and running in less than an hour, and most likely the best case is already looking at days to get everything back to where it was.
Regardless, backups are the first priority. Then a tested restore procedure.
If you can't automatically restore, it's because you've lost data. There is a huge variance in the data's value and the difficulty of recreating it, but a backup should let you clearly and unambiguously recreate whatever system you are backing up.
Not exactly. There are a lot of possible scenarios where the data isn't lost but can't be automatically restored. A broken tape, for example, isn't lost data, but it will be a big pain to recover from. If the data isn't properly indexed you will have a big problem figuring out what file goes to what machine, but the data can still be there.
You will have a hard time finding a backup program that doesn't have a good and tested restore procedure. However, that doesn't mean it works in your particular edge cases.
Even if the backups work perfectly, this forced downtime might be the best time to apply some change that your admins have known should be done for a while but couldn't afford the downtime for. (You couldn't do a schema update, but there are some smaller config changes that still require taking the master database down for a bit.)
Very few shops can do what you're describing. It's not something that is often worthwhile enough to get budget and engineering time. If you're reaching into your backups for a full restore then you're going to have data unavailability due to the nature of how cold backups work. Having delayed replicas and warm backups are a godsend when you have them but lots of shops don't want to pay at least 2x for their production db cluster and know the tradeoffs they're making.
> You would believe that everyone's backups are teetering on the edge of failure constantly.
Hum... Backups aren't normally "at the edge of failure"; the procedure either works or doesn't work. One must test to ensure the procedure works, and keeps working after every environment change.
That is, except for proprietary formats, like Exchange. Those can fail at any time, retroactively.
A socialized defense force against these sort of ransom attacks seems a bit antithetical to the American culture - taxes would go up and profits down to provide a service that only exists for a few "market losers".
I suspect this bill will face significant lobbying against it by companies involved in secured backups along with the ransomware distributors themselves.
It depends. One can set up a web server with BSD/Linux and nginx - it would be cheap to create and maintain and very hard/expensive to attack.
The balance is different when you have a company with understaffed IT and all that usually goes along with that: software which is not updated for months if not years despite known vulnerabilities, legacy systems kept "just in case" because no one knows what will break if they are decommissioned, poorly managed credentials to external systems, and so on.
Suggesting that this is a government problem to solve, or an invasion, leads to a type of framing that harms you and everyone else in your country that has or uses computers.
It's also not really a national security issue. The USA will continue to exist and function as the USA even without gas pipelines and power generation.
"National security" isn't some blanket term to mean "large infrastructure required for major industries"; it has a specific, defined meaning. Just because the feds use it as a blanket justification for a bunch of stuff doesn't mean we should embrace that usage, otherwise when everything is a matter of "national security" then nothing is. It's just like the overuse of the term "terrorism" to mean "any big crime".
> The USA will continue to exist and function as the USA even without gas pipelines and power generation.
Temporary outages, maybe. Sustained outages (or destruction) of gas pipelines and power generation, if systemic, would almost certainly mean mass starvation.
It leads in that direction. The better we get at defending against these "lesser" attacks, the better we are at defending against the bigger attacks that might happen. Also, the less likely it is someone will develop those attacks in the first place, because there is less incentive to learn how they work.
While banning such payments might remove the incentives, it also puts a huge burden on the victim, and the transition to better cybersecurity should be less disruptive than an outright ban.
Another solution that has no harms and only benefit is to require the reporting of every ransom payment. That would give the government the crypto transaction information to conduct taint and attribution analysis. It is currently illegal to knowingly use funds received from kidnapping or ransoms, and this reporting requirement would help the government enforce that.
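Taint analysis over a public ledger can be pictured as reachability in the directed graph of transfers: everything downstream of a known ransom wallet gets flagged. A toy sketch — the addresses are made up, and real chain analysis uses value-weighted heuristics (FIFO, haircut taint) rather than plain reachability:

```python
from collections import deque

def tainted_addresses(transfers, ransom_wallets):
    """BFS over (src, dst) transfer edges: mark every address
    reachable from a known ransom wallet as tainted."""
    outgoing = {}
    for src, dst in transfers:
        outgoing.setdefault(src, []).append(dst)
    tainted = set(ransom_wallets)
    queue = deque(ransom_wallets)
    while queue:
        addr = queue.popleft()
        for nxt in outgoing.get(addr, []):
            if nxt not in tainted:
                tainted.add(nxt)
                queue.append(nxt)
    return tainted
```

A reporting requirement is what seeds `ransom_wallets` in the first place: without the victim disclosing where the payment went, there is no starting point for the traversal.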
It places a burden on some victims to prevent a lot of others from becoming victims. That is a fairly standard trade-off in any society. This is also always the case for paying ransom for hostages, and is simply being extended to data.
Without a legal way to make payments, companies can no longer justify the trade-off of paying up and fixing the leaks as they spring up. This incentivizes them to prioritize security and actually overhaul their infrastructure properly.
If everybody agrees not to pay ransom and follows through, criminals won't try to hack companies to collect ransom.
But as an individual company, you can't coordinate with everybody else to prevent everybody from paying ransom, so not paying puts you at a disadvantage.
Philosophically, the usefulness of government is to threaten violence to solve coordination problems among individuals.
People rarely understand that the coordination solution is one of the most important powers the government offers over privatization.
They can tax/penalize undesired behavior. Individual companies may end up worse off, but as a whole the business community is better off with the rules in place.
This is why governments have to tackle pollution and climate change too. Companies are profit-motivated, and will only respond to what the economics demand. When governments start imposing demands with penalties attached, that's when companies actually start changing behaviors.
> Another solution that has no harms and only benefit is to require the reporting of every ransom payment. That would give the government the crypto transaction information to conduct taint and attribution analysis. It is currently illegal to knowingly use funds received from kidnapping or ransoms, and this reporting requirement would help the government enforce that.
Definitely a must-have for now, and on an international scale IMHO.
But an effective ban on ransom payments would still be the most effective measure.
The problem is how do you effectively ban it?
For many countries such a thing could never be enforced, so it wouldn't remove the incentive for non-targeted ransomware attacks IMHO, and as such won't work.
If a corporation pays, and ledger history can provide definitive proof, the corporation faces the same penalties as if they violated international sanctions. Corporations will need to onramp fiat to crypto somewhere, and FinCEN [1] will know based on SARs (Suspicious Activity Report)/CTRs (Currency Transaction Reports), or SWIFT if international monies transfers [2].
> For many countries such thing could never be enforced
so it follows that if a country bans ransom payments, criminals will just ignore that country and focus on the next easiest target, putting pressure on every country to follow the example and ban it too.
I think this will be very effective at stopping the most sophisticated targeted attacks, but won't have much effect on indiscriminate "viral" attacks, because those attack indiscriminately; still, wrecked targets in countries with a ban would at least serve as a deterrent for victims in countries that can pay.
Another fun idea via someone on Twitter: make ransomware legal but capped at a low amount, e.g., $5k. (And, perhaps, public reporting required after a 1-year delay.) Now white hats can enter the market safely, many holes are discovered for relatively low prices, and the country's tech infrastructure as a whole is less vulnerable to serious attack.
One issue is that if the costs of finding different vulnerabilities are distributed along a spectrum, this only lets you find the <$5k ones.
The problem is ransomware is software just like all the others. What do you do when some random script kiddie in way over their head manages to encrypt a seriously important network but can't develop decryption tools - or worse, loses the encryption key? Nobody really wants to promote unsophisticated attackers in this space. At least the existing criminals give you proven tools once you pay the ransom.
> While banning such payments might remove the incentives,
They'd pretty obviously not; companies are already forced to pay a fine (or a ransom, but it's money spent) and it obviously does not incentivize them to properly secure their network. Adding a fine to pay to the government on top (or, more cynically, a tax) will not change much, except that stricken companies now get hit harder, as you said.
I think the grandparent means banning such payments might remove the incentives to hack in the first place, since a hacker can't expect to make any ransom revenue from a company that obeys such a ban.
Insurance companies are likely to disallow ransom payments in their entirety. Too much risk considering the security posture of most organizations.
Boards will, generally, still not fund and support effective security culture without steep penalties for breaches (I am in infosec and speak to C-suite folks as part of my gig; breach impacts, in their current form, are "cost of business"). “Show me the incentive, and I will show you the outcome.” – Charlie Munger
Paying the ransom should make you an accessory to the crime with jail time and all for the executive who cleared it. This should put an end to it pretty quickly.
And also put an end to the organisation's ability to operate, making potentially thousands of people instantly and needlessly unemployed because their bosses didn't think security was important.
That is the system working as intended. Organizations, and those who lead them, that can't hack it in the modern economy must be removed in order to make room for those that can.
Ransomware gangs exist and thrive because their crime is profitable -- their victims are willing and able to pay. Make many of their victims unable to pay, and the equation changes.
If the hackers are brave... "Our IT security firm can solve infections of this particular ransomware, but only this one. We charge 20% of what the 'hackers' demand."
And the solution would be for this security firm to have Russian (allegedly ;-)) friends that deploy the ransomware and give them the decryption key. See, hacked company, you're not paying the hackers, you're paying IT security experts that are able to recover your data!
I think it's pretty difficult to meet the "known or should have known" threshold for that kind of thing.
The proxy company can always say they are using their super crypto cracking skills to reverse the encryption. The hacked company would have no way to know they were paying the hackers, nor any way to find out.
Obviously the proxy company is breaking the law, but they might be in some untouchable jurisdiction.
1. If you pay the hacker, we want money because you paid a hacker.
2. If you don't pay the hacker, we want money because you leaked your users' data.
The bottom-line is that if you're a victim of ransomware, the government joins the hacker, both of them kicking you while you're down and demanding money.
> The bottom-line is that if you're a victim of ransomware, the government joins the hacker, both of them kicking you while you're down and demanding money.
The rationale for outlawing ransom payments is that it eliminates the incentive for ransomware attacks.
The real question is whether "no-concessions" policies reduce the incidence of ransomware attacks. The answer to that question isn't obvious. However, conditional on no-concessions working in the case of ransomware, "kicking corps while they're down" is not a relevant consideration. The cooperate-cooperate quadrant of the game has higher expected value than the defect quadrants, so you force cooperation by whatever means necessary, even if that means some actors don't get the best possible outcome from their own perspective.
NB: there's some evidence that no-concessions policies don't work particularly well in the case of kidnapping [1]... I'd take care extending this finding to ransomware gangs. If you read the whole PDF, it'll become clear why this behavior is interesting but might not transfer to today's ransomware gangs. That said, when crafting policy on ransomware attacks, it's worth keeping in mind that ransomware attackers may or may not be of the homo economicus species. At the very least as an assumption that you start with but are open to dropping as new evidence presents itself.
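The coordination argument has the standard prisoner's-dilemma shape: refusing to pay (cooperating with the ban) is collectively best, but paying (defecting) is individually tempting. A toy payoff matrix, with entirely invented numbers, just to show the structure:

```python
# (row player's payoff, column player's payoff); "cooperate" = refuse
# to pay ransoms, "defect" = pay up. All numbers are invented purely
# to illustrate the dilemma's structure.
PAYOFFS = {
    ("cooperate", "cooperate"): (-1, -1),  # attacks dry up; small prevention cost
    ("cooperate", "defect"):    (-6,  0),  # attackers stay funded; refuser eats the loss
    ("defect",    "cooperate"): ( 0, -6),
    ("defect",    "defect"):    (-5, -5),  # ransomware stays profitable for criminals
}

def total_welfare(a, b):
    """Joint payoff of a strategy pair."""
    pa, pb = PAYOFFS[(a, b)]
    return pa + pb
```

Note that defecting dominates for each player in isolation (0 > -1 and -5 > -6), yet mutual cooperation beats mutual defection — which is exactly why an external enforcer such as a payment ban can improve the joint outcome.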
Penalizing you for being blackmailed doesn’t force cooperation. Because an alternative is paying quietly and not informing the authorities. This is what this model encourages in practice. Less cooperation.
The government should fine the company either way for not properly securing their user's data. Security is serious business, it's time companies took it more seriously.
> Part of a good security posture is protecting yourself against insider threats. If you're not doing this, you're not taking cybersecurity seriously.
How do you protect yourself? There are ways to mitigate, surely, but any failure can be a catastrophic incident, and it is literally impossible to protect against all internal threats (in the sense of guaranteeing that no such threat is ever acted upon). All else aside, it just shifts the responsibility one level up: now you have to worry about a compromise of the people responsible for protecting from internal threats.
Security is hard, and the difficulty of answering this question in any particular org probably takes up a lot of the time of any competent and properly staffed CISO office.
But, basically, the only mechanisms in play are some combination of limiting access and, where that's not possible, decreasing employees' ability/incentive to defect.
This is a bit like the question how to have a system that promotes honest, smart politicians. As you might guess, nobody has figured that out yet.
Ultimately the only way is an omniscient, omnipresent CEO who does all the important stuff alone. Which is probably the core reason why no one has leaked God's files on the Universe, yet.
Oh yeah let's talk about how perfect is the enemy of better, when discussing an idea to bury victims of ransomware into the ground with government penalties on top of ransom and leaks.
Here's another thought in the same vein: let's penalize rape victims for attracting male gaze and not fighting sufficiently to avert contact. Sure, some women will get raped still, but let's not let perfect be the enemy of better. That's how they deal with it in some countries actually. They blame the victim. It doesn't reduce rape at all. In fact it reduces reported rape, because women don't want to face the legal and family repercussions of getting raped.
Let me tell you what will happen in the case of ransomware.
1. You get hit by ransomware.
2. Previously you'd ponder contacting authorities. Nope. They're gonna close your options and fine you either way. Keep your mouth shut.
3. Pay as quickly as possible and hope the word never comes out you were blackmailed at all. As far as the world and the government know, your security is fine, nothing happened. No fines, no lawsuits.
4. Result: ransomware proliferates and grows into the biggest organized crime organizations of this century.
How's that about not letting perfect be the enemy of better?
...In most cases, the CEO and probably a huge number of people in upper management can do any number of things to nuke a company from orbit. But this doesn't happen very often. The things that those people can do to nuke a company from orbit are typically tightly controlled functions, and the people with those responsibilities are carefully selected and extremely well-compensated.
Yes, some employees need to be absolutely trusted. No, you don't need to absolutely trust every employee (or even most employees).
Turning to your Snowden example, if you're a TLA and find yourself completely owned by an outside contractor making low six figures, then you've utterly failed at managing insider risk.
Think carefully about whether you care how someone stole the data, or whether you care more about how likely the data is to be stolen. I can argue that penalizing companies for being blackmailed in fact encourages less cooperation with authorities, which encourages more people to blackmail them.
Not necessarily so, as the company may just buy insurance against future attacks. Granted, insurance companies will then demand compliance with security checklists, but this feedback loop is very slow.
Then the incentive is to avoid becoming a victim to ransomware in the first place by making it more cost effective to hire decent security than to take the risk and end up getting targeted.
Sounds good to me. Why should companies be allowed to save money by exposing our personal data and then pay for it by funding terrorist organizations, organized crime, and totalitarian governments?
Why do you assume that it is personal data that are at risk? Maybe it is your new super-duper tech that hackers will threaten to leak to the rest of the world?
That's the first-order effect. The second-order effect is companies paying less to ransomware creators, making it a worse business to be in. Over time this should result in less business pain.
This will ultimately just create a larger market for ransomware insurance. Insurance premiums are likely the lowest cost compared to 1) paying the fines or 2) actually improving security.
Most businesses already have some form of insurance covering their liability in these situations and those will just price in whatever fines might need to be paid.
> This will ultimately just create a larger market for ransomware insurance.
The CEO of Swiss Re said this[1]:
> He observed that the cyber insurance market is currently worth around $5.5 billion in premium, compared to "gigantic" yearly losses that extend into the hundreds of billions of dollars.
> "There's a cyber market that's very tiny compared to the total exposure," he told CNBC. "It's going to grow but only a tiny minority of cyber is actually insured."
> "And I would actually argue that overall the problem is so big it's not insurable," Mumenthaler continued. "It's just too big. Because there are events that can happen at the same time everywhere that are much more worrying than what you just saw."
An alternative option 3:
- Pay your IT/security department or hire a consultant.
- Pay for licenses and updates of software and hardware.
- Don't expect the job of 5 people to be done by 1.
- Don't let a bunch of trainees run your infra.
The government should make companies pay even more, so that other companies understand what the proper way to avoid getting ransomed is, or spend money finding out. That's better than the money going god knows where to finance god knows what.
SolarWinds was blaming some intern for a bad password; if it were up to me, I would close down the whole company for such utter bullshit. I understand that at their scale it is still possible to have some loose ends, but was no one doing any audits, no one doing any security awareness training? I bet you could blame at least 10 managers there for not even thinking about security, rather than some intern.
People really out here acting like all of Russia is on the sanction list.
It's like the head of Sberbank and a few companies and a few individuals, and that's it.
There is practically no way for this to be a real rebuttal or conversation. Companies can pay ransoms, intermediaries can pay ransoms. There is no legal quagmire.
Why would you accept a pseudonymous cryptocurrency in a country whose fiat off-ramps the US can't even get financial records from, and then use a pseudonym that matches your actual name on the OFAC list? Let alone just be a person who is on the OFAC list. This is so improbable, the US Treasury can pound sand.
For anyone passing by: it is impossible to tell from blockchain analysis if anybody sent a payment to a particular Monero address, as neither sender, recipient, nor amount is stored in transaction data or on-chain anywhere. Even client side, the data is limited.
Even if the US Treasury seized the recipient's wallet and had it open to look at all transaction history, the Monero protocol doesn't tell you what address payments were received from, so the US Treasury would not be able to take that wallet and compare it against US exchanges or other covered persons to say those people violated sanctions.
On the other hand, I do think Monero wallets show what address you sent to, so if the Treasury seized an exchange's or a covered person's wallet, they could see whether it sent to the sanctioned address. But of course, the person on the OFAC list has infinite subaddresses to rotate through.
All the governments around the world have only seized a handful of wallets, so to me it seems like an improbable risk. Most of those seizures were only possible via user error and nonchalant storage of these kinds of assets.
You have to go to individuals and force them to give up a password to derive a private key. Without the use of force, many governments don't have the legal power to force people to open things. Even with on-premise hacking, there are still extremely high barriers per wallet, which makes it basically impossible. With the use of force, they will still face too large a crowd, and will still lack the legal rationale to do so.
And everyone can own this asset without the state knowing of it.
I think that's fair. At a previous job from years ago, we took a meeting with one of these ransom payment-facilitator companies. I got the impression they were probably legit and just trying to help companies who knew nothing about cryptocurrency quickly recover from attacks.
However, some percentage of these firms definitely are basically part of the ransom racket and essentially act as intermediaries for ransomers. And of course, who knows if my gut feeling of legitimacy in that one particular case was correct or not.
Apologies to the people from China, Russia, Ukraine and so on on HN, ahead of everything else: it would not be a bad idea for countries that routinely shield bad actors and/or actively engage in electronic warfare across the net to be blackholed as long as they don't cooperate in bringing the perps in these cases to justice.
People will get killed because of these actions, if it hasn't already happened.
Of course that works both ways: the countries on the other side of that divide would have to stop doing the same thing, to each other and to countries on the other side of the divide.
It's sort of an 'electronic curtain', the iron curtain of old. China has already erected one half of such a barrier: the GFW definitely reduces the chances of foreign hackers attacking Chinese infrastructure, though it doesn't seem to do anything to keep attacks from China out of the rest of the world.
So regardless of the origin of these hackers, I'm all for a bit more isolation until we've figured out how to deal with this problem, cross border digital crime is going to be (and already is) a real headache.
I just don't think this could work. If the money is good enough, I think hackers in blackholed countries would find ways to connect with the outside world, particularly if their government was inclined to help them do it.
Personally, I believe we're living through the events that will lead to the permanent fracturing of the Internet as we know it, along state or bloc lines.
I'm torn as to this being a pro or con for humanity.
I don't think this will happen. American conglomerates are too dependent on both Western and non-Western markets to push for a fracturing. It would mean a lot of work and duplicated effort to offer the same services across multiple Internets, on top of destroying lots of supply chains (what to do if vendor X only has an API on the American Internet but vendor Y operates off the Chinese Internet?). A lot of money lost for something that could be fixed by just investing in cybersecurity.
We may put you in jail for paying sanctioned criminals, but we will not tell you explicitly what constitutes a sanctioned crime or who those criminals are -- we can pull it right out of thin air.
This way they evade the need to go to the legislature to institute a new class of ban list for them to run.
As with any "pull it out of thin air" type privilege, it's a bad thing.
It would be nice if the NSA had spent the last decade or two helping shore up cybersecurity, instead of creating, stockpiling, and accidentally leaking zero days to later get used in ransomware attacks.
I've thought exactly this for years. I'm afraid you'd have to build a completely separate organization with a separate management line up to the top of the chart. And even then, you'd spend most of your time protecting yourself from the NSA and its brethren.
Imagine the city you live in being bombed daily by an enemy air force. Then you discover (after losing your house) that your neighbour paid the attacking air force to spare his house.
A better analogy would be someone's business getting robbed, and the owner being punished for paying the robber, who has flown overseas, to ship the goods back. But still, this is different, more complex, and more nuanced.
Which is way more reasonable than it sounds. Put differently, it incentivizes companies to get their security in order. Keep in mind the arguably larger victim of these attacks is the public who may lose their own data or security which they have entrusted to these businesses.
We have an expectation of due diligence from firms. If you're a company that rents storage space and you keep your property unlocked and lose all your customers' stuff, it's not just the thieves who are in trouble.
I'm surprised companies don't buy insurance against this. I could easily see insurance companies offering a new product: pay us $25,000 a year for a $5 million liability shield, or $50,000 a year for a $10 million shield, etc.
Insurance companies can then develop their own methods to better determine premiums for companies based on measures they take.
Companies can then decide how much risk to take in choosing not to invest in cyber security for their operations based on the cost of their premiums.
If this is an insurance option that currently exists, perhaps more companies will begin paying for it.
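To make those hypothetical numbers concrete, here is a rough actuarial sketch. The premiums and limits come from the comment above (they are illustrative, not real market rates), and the 30% expense loading is an assumption; the function asks at what annual breach probability such a premium would be actuarially fair.

```python
# Back out the annual breach probability implied by a flat cyber premium,
# assuming a total-loss claim and a fixed expense/profit loading.
# All figures are hypothetical, taken from the comment above.

def implied_breach_probability(annual_premium, coverage_limit, loading=0.3):
    """Annual breach probability at which the premium is actuarially fair."""
    pure_premium = annual_premium * (1 - loading)  # portion available for claims
    return pure_premium / coverage_limit

p1 = implied_breach_probability(25_000, 5_000_000)   # ~0.35% per year
p2 = implied_breach_probability(50_000, 10_000_000)  # ~0.35% per year
print(f"{p1:.4%}, {p2:.4%}")
```

0.35% per year is roughly one total-loss breach every ~285 years; whether that is realistic for a given insured is exactly what an underwriter's security questionnaire would try to establish, which is the premium-setting feedback loop discussed above.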
Cyber insurance is a real field but it's still very new so not a lot of executives know about it. I think in the future it will become a very popular industry. I could even see it becoming an option for private citizens, paying $x/mo so that when you get doxxed your insurance company will sue a bunch of websites on your behalf to get your data taken down. Right now this is an awfully tedious and high-effort process, but it's especially necessary when you have e.g. revenge porn or private messages threatened to be revealed by blackmailers.
Sometimes it can be covered by a business owner's policy[1], but there are specific policies covering cyber extortion, e.g. [2] [3]. As you suggested, they are generally paired with risk management.
Companies would just add the cost of the punishment into the cost of the ransom, re-evaluate the risk, and probably decide to just keep doing what they're doing.
The problem is not that companies are paying ransoms. The problem is that companies who operate infrastructure of national importance and who collect sensitive data about us are losing control of said infrastructure and data. If paying ransoms is part of the discussion, we're already in a very sorry state. Legal action should be focused first and foremost on preventing that loss of control.
First we need to decide what is important enough that we should legally require companies to protect it. Certain data or services may require special licenses, depending on scale and importance.
Then we need to decide on how to evaluate whether or not the company has provided sufficient protection and what the punishment should be for failing to provide sufficient protection.
Then we need to establish a government organization of white-hat hackers charged with evaluating the protection measures implemented by companies -- much like a health inspector goes around evaluating the conditions of food service companies.
Why is it that "sanctioned" sometimes means allowed and sometimes means disallowed?
You can say, an action was sanctioned, meaning it was approved by someone in power. You can say an action was sanctioned, meaning it was punished, presumably because it wasn't approved. And you can say unsanctioned to mean it wasn't approved, or to say it wasn't punished. What the hell, English?
English is my 2nd (or 3rd?) language, so pardon my understanding, but I think when it's a noun, it can be approval or denial; but as a verb it's always approval. Right?
My understanding is that it always means "approved." When someone says "sanctioning Russia" they're referring to the set of rules or punishments that have been approved by the US government. These rules or punishments didn't exist before, but now they do, hence they are "sanctioned."
I have some issue with the headline - the article discusses "facilitating" so it may in fact target money-transfer firms and banks.
That said, if these laws can target the victims of ransomware, this sounds self-defeating. Not only will companies continue to get hacked (as nowhere do I see any meaningful help in preventing "cybercrimes" or shoring up cybersecurity), but now there will be incentive to not report that a crime took place at all.
Put another way, if I have been a victim of ransomware and the only way to recover the data is to pay the ransom, should I:
A) report the crime and hope I can recover the data some other way?
B) pay the ransom, report the crime, and then suffer more fines?
C) pay the ransom and tell nobody, allowing the crime to go unreported but avoiding the risk of further punishment from the government?
There is probably a way to help companies, and maybe a national cybersecurity initiative would be of use here, but blaming/punishing the victim is not the way. Maybe preventing the payments is reasonable, but even then, it seems that prevention of the crime itself is the best medicine (as it is in most cases).
There's probably a hell of a market opportunity for stagnant businesses to introduce the malware themselves, ransom themselves, pay themselves, collect the insurance, then launder the cryptocurrency.
If you're skilled enough to do that (and not get caught) and that ethically compromised, there are easier (and legal!) ways to make money.
Remember kids, the best way to rob a bank is to buy one.
I'm kind of on the fence here. I see the logic behind it but in many cases incidents will simply be swept under the rug and users will never find out that their data has been compromised.
Monero is often accepted (as in the pipeline situation). It is an interesting use case: the company could try to pay privately to avoid legal action against itself.
Another idea would be to require breaches to be made public.
And of course, the government could coordinate a set of best practices and models with good companies, working with MS and even Linux vendors to help get the message out and implement good policy.
Like a 'tiered strategy' for home, small biz., mid biz. and 'high touch enterprise'.
Basically some kind of 'board' that exists to help train, coordinate, and communicate the things that need to be done.
The miners can easily stop this, and they will if users signal even a tiny bit of preference against fungibility. I'm not sure the government can control this, although I'm sure they will try, but some authority may be needed to coordinate information sharing between victims and validators.
Not even the most die-hard freedom fighters will side with the dishonest and violent. Cryptocurrency will be the worst thing that ever happened to them.
> Not even the most die-hard freedom fighters will side with the dishonest and violent. Cryptocurrency will be the worst thing that ever happened to them.
The history of many nations has proven this to be untrue.
You’re probably right. But it seems in everybody’s interest not to force that alliance. For instance, look at Monero booming today after the treasury news. That’s a sign of who ordinary people are most afraid of. It would be nice for modern government to at least aspire to be more loved than feared.
If the USG isn't able to provide protection against ransomware attacks, what other choice is there? This is going to be a hard sell - especially when the USG continues to pay ransoms themselves. Oh they may call it other nonsense like "humanitarian aid" to try and save face, but they're ransoms. I'm not in favor of kicking the victim when they're down.
I wonder: if you could insure your business against ransomware attacks so that instead of paying the ransom you just file a claim for your losses, then maybe you could concoct an insurance-fraud scheme by having hackers ransomware your business and collecting the insurance money. Basically a 21st-century version of burning down your business for the insurance money.
There's pretty much no way to know if a bitcoin address belongs to someone from a sanctioned country or otherwise sanctioned, right?
So this effectively makes paying ransomware an activity with very high legal risk.
It will be interesting to see how that all plays out. It's hard to imagine the regulators didn't think of this... I wonder what they are thinking exactly.
The government doesn't need to prove a bitcoin address belongs to someone on the sanctioned list, the specific intermediaries used aren't important. Proving that you know you are paying someone on the sanction list is enough.
You misread my point. In fact, the article suggests they don't even need to prove that you know you are paying someone on the sanction list.
> In a pair of advisories, the Treasury’s Office of Foreign Assets Control and its Financial Crimes Enforcement Network warned that facilitators could be prosecuted even if they or the victims did not know that the hackers demanding the ransom were subject to U.S. sanctions.
Indeed, that means it is legally dangerous to pay anyone whose identity you don't know. That is what I'm saying, yes.
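To illustrate the limits being discussed, here is a minimal sketch of the naive screening a payer might attempt: checking the destination address against a locally cached set of sanctioned digital-currency addresses (OFAC does publish such addresses with some SDN entries). The addresses and the cache below are made up for illustration.

```python
# Naive sanctions screening: check a ransom payment address against a locally
# cached set of digital-currency addresses from published sanctions entries.
# These addresses are hypothetical placeholders, not real OFAC data.

SANCTIONED_ADDRESSES = {
    "1HypotheticalSanctionedAddrA",
    "1HypotheticalSanctionedAddrB",
}

def is_listed(address: str) -> bool:
    """True if the address appears on the cached sanctions list."""
    return address in SANCTIONED_ADDRESSES

# The catch raised above: a listed actor can trivially rotate to a fresh,
# unlisted address, so a negative result says nothing about who controls it.
print(is_listed("1HypotheticalSanctionedAddrA"))  # True
print(is_listed("1FreshNeverSeenAddress"))        # False
```

This is why a clean screening result offers little comfort: it only proves the address itself has not been published, not that the recipient is unsanctioned.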
Maybe this new wave of ransomware and the attention it's getting will finally force IT onto a more quality-driven path. Right now I see a lot of projects with small budgets sent to the fast lane to finish ASAP, security be damned. Or projects with big budgets wasted on middlemen paying scraps to interns sold as experts.
This looks like it could work, however what about the cases where people decide to pay, and end up in cahoots with the gang in order to keep them both out of trouble?
I've had a stance on this for a while: paying ransoms to hackers is no different than cooperating with terrorists. Like others have mentioned here, this is a national security issue. CMV.
The article isn't about the companies whose data is held hostage, it's about consultants that sit in the middle and help those companies with paying ransoms. It's more a matter of those consultants being required to register as money transmitters.
Hard drives crash eventually. Other corruption events happen too, along with user error. Do these businesses go under when they suffer other, non-ransomware data loss events? What is it about ransomware that is different from any other type of data loss event -- is it the fact that ransomware affects a wider footprint?
Ransomware typically targets all systems, including redundant systems, and backup systems used for recovery in the event of everything you've described.
> The person I was replying to took a phrase out of context
It wasn't out of context. That's just how you interpreted it.
> used it as an opportunity to advance an unrelated political agenda.
It's not unrelated. Here, let me enumerate the thread again and provide some context:
> > @zepto: Insure against the losses associated with an unpaid ransom.
You offered a suggestion to businesses to help protect against ransoms.
> > @thisisnico: A lot of times the losses result in the loss of the business entirely.
Someone suggested that the insurance can't cover all of the losses.
> > @piptastic: Not every business deserves to be in business
Someone else suggested that being "in business" isn't a right.
> > @ryan_j_naughton: No business "deserves" to be in business.
Indeed, another person agreed. They then offered a corollary opinion.
> > We should fight the rent seekers who believe they are entitled to their markets and use regulatory capture to maintain their position
Here's where you started throwing a hissy fit about politics. The conversation had moved on from what you first talked about. There's nothing wrong with offering an opinion, nor with it being political, and it's very much in context with the movement of the conversation: first from "businesses should insure themselves" to "businesses can lose everything", followed by "businesses don't have a right to exist", and finally an opinion: "we should fight the rent seekers".
I'm reading that you're upset that the conversation has moved on because you're complaining that someone made a comment "out of context" and I disagree.
> if you can’t protect your data, then perhaps you don’t have the right to be in business
I fully agree.
> They didn’t agree. They added a decontextualized statement a bit like the one they were following
I read it as an agreement. I see you don't. You clearly think it's an out-of-context opinion, and I disagree about that.
> And then added a political statement that was a complete non-sequitur to the conversation.
It is, for sure, a political statement. But it's not a non-sequitur. Businesses that are uninsurable are arguably rent-seeking to stay in business.
> Why do you feel the need to defend these statements?
Different people have some very different opinions.
Because I can. Because I'm bored. Because I want to. Because it's not against any rule. Because I think you're wrong. Take your pick. Ultimately, your question here is rather out-of-context, unimportant, and doesn't add anything to the conversation.
Why do you feel the need to attack these statements? Why do you feel it's necessary to claim that an opinion is made out of context and doesn't contribute to the conversation? Why do you continue a long reply chain denying someone who disagrees with you? That's effectively what you've asked me.
> I'm reading that you're upset that the conversation has moved on because you're complaining that someone made a comment "out of context" and I disagree.
That’s you reading something in that isn’t there. There is no complaint.
>> They didn’t agree. They added a decontextualized statement a bit like the one they were following
> I read it as an agreement. I see you don't. You're clearly think it's an out-of-context opinion and I disagree about that.
Ok, but nothing supports this. It’s not a statement of agreement. You are welcome to read that into it if you like though.
> Businesses that are uninsurable are arguably rent-seeking to stay in business.
No. That clearly doesn’t follow.
>> Why do you feel the need to defend these statements?
> Different people have some very different opinions.
> Because I can. Because I'm bored. Because I want to. Because it's not against any rule. Because I think you're wrong. Take your pick.
> Why do you feel the need to attack these statements?
There is no attack. I think you are imagining that there is one. That is why I asked why you felt the need to defend the statements.
> Why do you feel it's necessary to claim that an opinion is made out of context and doesn't contribute to the conversation? Why do you continue a long reply chain denying someone who disagrees with you? That's effectively what you've asked me.
Because I was curious about why you were defending the statements, a question you haven’t answered.
Responsible IT operations in the '20s is optional in the same sense that OSHA compliance was optional in the '70s or SOX compliance was optional in the '00s.
Ransomware will target whichever is the dominant platform. If more companies switch to Linux it's not like there haven't been 0 days there before that could have been exploited.
IMHO, reducing it to "stop using windows!" is a crude reduction of the forces behind this.
The inherent difference in the design philosophies of the two operating systems will arguably make it difficult to target Linux in the same manner that Windows is being targeted today.
It’s like Spectre all over again. Just as many of the CPU performance gains of the last 25 years turned out to be based on taking insecure shortcuts, perhaps we will find many of the economic gains of the information economy are founded on similarly insecure practices.
Maybe handling data at scale is unaffordable for most businesses, who rely on those shortcuts, and wouldn’t be profitable if they had to hire competent infosec staff.
I still like my idea of keeping it legal to pay the ransom, but you have to pay an equal amount as a fine/fee to the government. Deter the bad guys by driving the market price up (so in effect, the bad guys will collect less money because they can't ask for as much), and incentivize prevention (i.e. making systems more secure) by making it more costly to address after the fact.
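As a toy model of this proposal (assumed numbers, and assuming a victim pays only while paying is cheaper than recovering without paying): a matching fine makes the victim's total cost of paying ransom R equal to 2R, which halves the maximum ransom a rational victim would still accept.

```python
# Toy economic model of the "equal fine" proposal. A victim facing recovery
# cost C (rebuild, downtime, lost data) pays a ransom R only while
# R * (1 + fine_multiplier) < C, so the attacker's maximum viable demand
# shrinks as the fine grows. All numbers are hypothetical.

def max_viable_ransom(recovery_cost, fine_multiplier=1.0):
    """Largest ransom a rational victim would still pay."""
    return recovery_cost / (1 + fine_multiplier)

C = 1_000_000  # hypothetical cost of not paying
print(max_viable_ransom(C, 0.0))  # no fine: attacker can demand up to C
print(max_viable_ransom(C, 1.0))  # equal fine: demand is capped at C/2
```

The model ignores reputational costs and the option of simply hiding the payment, which is the main objection raised elsewhere in this thread.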