TBH the issues that I've seen with Azure are pretty shocking. Like "I don't believe anyone did a basic pentest against that system even once" levels of shocking. That's insane for a company the size of Microsoft.
These vulns are cross-tenancy violations, which, again, is insane. That's as bad as it gets for the cloud.
> This incident demonstrates the evolving challenges of cybersecurity in the face of sophisticated attacks.
The insane thing is that some of these vulns are as easy to discover as just running nmap. I'm sort of shocked that people haven't run into them accidentally. Hardly sophisticated.
Remember when cosmosdb users could read other users databases? [0]
It was so boneheadedly stupid, it was like a sysadmin making all user directories readable by all users. Not sure how that would not be tested. And it made me worry about what other vulnerabilities lurk in Azure.
The vulns from wiz were so bad I basically wrote off Azure that day. And not bad like 'impact is high', bad like 'did anyone in your massive security org actually look at this?'.
I get the general sense that Azure is a Frankenstein's monster of a bunch of services: some they had built before, some formerly internal, since published and scaled up for customers.
Nothing seems to have been designed to naturally fit together; everything feels like it's glued on.
Microsoft certifies to the government that they cannot protect against moderately skilled attackers, via the Common Criteria certifications listed on their security homepage [1][2]. These are the only security certifications they mention on that page (FIPS is also mentioned, but it is more of a crypto certification than a security certification). They specifically certify conformance only to EAL1 [3][4], which requires only AVA_VAN.1, the ability to protect against attackers with a "Basic attack potential" [5]. The evaluation methodology required by AVA_VAN.1 can be seen in their certification report [6]. Effectively, the evaluator does a Google search for "Windows CVE" and then checks that all the search results are patched. That's it. I am not joking; that is literally how they certify security.
To be fair, Microsoft has historically certified to conform to EAL4, which requires AVA_VAN.3, the ability to protect against "Enhanced basic" attackers, and which includes an actual penetration test. However, despite decades of attempts, they have never once successfully certified any product against AVA_VAN.4, the ability to protect against "Moderate" attackers. They have literally never once been able to protect against "Moderate" attackers. And they have the certifications to prove that they absolutely, positively, cannot.
It is the height of idiocy that anybody listens to or trusts anything that Microsoft says about security. They have literally certified that the height of their ability is abject incompetence: the total inability to protect against "moderately" skilled attackers. Until they can definitively prove and certify otherwise, their claims of security should be completely ignored as the useless trash they are.
What exactly are moderate and advanced attackers? The documents cited only seem to go up to the "enhanced basic level" you mentioned which is penetration testing with public vulnerability search.
Of note are the Security Assurance Requirements (SAR) seen here [2] which discuss the certification process.
The Security Functional Requirements (SFR) seen here [3] are more focused on what you are certifying and the sorts of problems you are certifying to solve.
As seen in the SAR document [4], AVA_VAN.1 is "basic" with a public vulnerability search. AVA_VAN.2 is the same, but with an independent pentest. AVA_VAN.3 is "enhanced basic", AVA_VAN.4 is "moderate", AVA_VAN.5 is "high". The EAL levels are listed here [5] and echo the same things I stated about their corresponding AVA_VAN assurance requirements. Historically, EAL4 (the highest level ever achieved by Microsoft for any product in any configuration) said something like "protects against casual and inadvertent attackers", so that should probably help you calibrate what "enhanced basic" means.
As for what constitutes "high": you can see an OS certified against the SKPP [6] here [7] at EAL6+. An EAL6+ certification requires AVA_VAN.5 or equivalent. For the SKPP they use an equivalent, AVA_VLA_EXP.4, which requires the NSA to certify that the OS protects against the NSA (with full source code) as a proxy for a "high attack potential" actor.
Yes, that is a certification by the NSA certifying that the NSA cannot hack the OS even with full source. This is the OS used on the F-35, and the certification was done to determine whether it was acceptable to bet the entire US Air Force on the OS, so the NSA was not playing pretend in this case. That is what constitutes a "high attack potential" actor.
So the gap here is: "protects against casual and inadvertent attacks" vs. the literal NSA.
The funniest part is that literally everybody says they are being attacked by "nation-state" actors. If they really are being attacked by "nation-state" actors, then they should be using systems that have been proven and certified to protect against them, not systems proven and certified to protect against your roommate in college.
As an anecdote, I once sent Microsoft a vulnerability report showing it was possible to get a certificate in Azure for a domain you didn't control and have it validated via a third, attacker-controlled domain.
They rejected it on the basis that they do not investigate any vulnerabilities that require man-in-the-middle to exploit.
>That's insane for a company the size of Microsoft.
Why? Why does company size have anything at all to do with the security of their product?
Microsoft has been making shoddy, horribly insecure products for decades now. Do you need me to list all the email viruses and other such things that were running rampant in the 90s and 2000s?
What's truly insane is that people keep expecting Microsoft to do better, and then being disappointed when they don't. The common saying is "the definition of insanity is doing the same thing and expecting a different result", and that's exactly what most people do in regards to Microsoft and product quality or security.
Size is correlated with resources. Microsoft should have the resources to throw at this problem.
Unfortunately the Microsoft approach to software engineering is to set a bunch of monkeys loose on a room full of keyboards, rather than to spend their considerable resources vying for top talent.
(Obviously they do have some good engineers, no offense if you're one of them, it's simply that from my sample size of 5, 80% were just warm bodies with email access.)
Of course they have the resources. What they don't have is the incentives to do better. Just like every other big company that has a breach, they are not punished when they ignore basic security and they are not rewarded for paying attention to it. In fact they are punished by Wall Street for paying attention to it, because that costs money.
As long as all they have to do is say "Whoops! We're sorry" when a breach happens, we should expect breaches to be the norm for all big companies.
> Of course they have the resources. What they don't have is the incentives to do better.
Off-topic: this is exactly the same reason that Google offers up ads for scams and ads containing or linking to malware on their own advertising network.
The only way to get to the size of these companies is to scale what can be scaled, and damn the rest until the pigeons come home to roost. Which, if they play their political-donation cards right, won't happen. And if it does, jeez, they've made bank in the meantime, and will Goodhart the shit out of whatever watered-down legislative changes may make their way through.
They definitely have the incentives. This conversation is a good example of that. Azure is attempting to compete in an aggressively competitive market and their customers absolutely care about security.
Yep, they could be hiring independent hackers to test their security 24/7, and they just simply don't. It's like me doing a security audit of my own software; that would just be dumb, so you have pen-test dudes who poke at it.
In addition to that, security should probably be a consideration at the design phase of any feature or product. As in, an internal team should be vetting design and critical areas of the code itself. But the culture there only cares about "partner requirements". Security, tech debt, UX are simply glossed over.
> "the definition of insanity is doing the same thing and expecting a different result"
This is a nitpick, but it's often completely reasonable to expect a different result by doing the same thing. Nothing against what you wrote, I just hate this saying so much. It's usually wrong to assume things are context-free. Most of life is defined by routine. Erosion carved out the surface of the earth, breathing and eating keeps you alive, etc.
At some point the definition of "same" comes under scrutiny and you will find that it's impossible to truly do the exact same thing more than once, and yet even if it was possible you might still get a different result because other things you had no control over changed. Entropy makes it all just so.
Well, I think the saying kinda implies that the conditions are the same. Obviously, common sense dictates that doing the same thing when conditions are radically different can yield different results. (For instance, taking a deep breath is usually good for you. But doing so underwater is obviously not.)
Of course, this then leads to an argument over how the conditions differ in the example where the saying is used, like with MS here.
Well, practically speaking, the conditions are never the same over a repeated course of action. If you are trying to break something, for example, at first you don't succeed, but in the end, after repeating the same action again and again, you finally do.
Google has the money to hire customer service representatives for customers to call when they have a problem, but they don't do so, because it's not a priority for them. It's the same with Microsoft and security.
I don't understand your objective in this conversation. Are you trying to argue it's okay that Microsoft is incompetent, because Google also has bad customer service? Or that we shouldn't criticize Microsoft's awful security practices because it's "not their priority"?
Honestly I don't understand how any company can rely on Microsoft for security, and how they can get away with blaming their customers so often.
You can literally kerberoast Azure AD by default, and that's an attack vector known to be used in the wild since 2014. In a cloud service. Today.
Spammers literally rent Azure VMs because they know that Azure's IP range is not processed by Outlook's/Exchange's email filters.
So often, researchers have done the right thing and disclosed everything correctly, only for Microsoft to say "oh yeah, here is another RCE, but we don't give a damn. Oh, and there is no patch either."
It's just so ridiculous.
There are even unfixed VBA RCEs from the Office 2013 days which still work, because a never-touch-running-software mentality crept into how Office is built (which is: keep a literal copy of all outdated Office versions for the sake of compatibility).
And then people wonder why "hackers" always say that Microsoft is insecure, and why ISO 27001 is now a Google dork to find easy-to-hack victims.
My completely unqualified outsider perspective is that they're a slightly different phenomenon. The old-school Microsoft security issues stemmed from security being more or less an afterthought, or from comically bad default options. Many years and many billions of dollars later, those systems are largely OK. But of course on-prem systems are often not patched; that's arguably not their fault (although Microsoft has made patching much harder than it needs to be, possibly to drive SCCM sales, I'm not sure).
Azure...is a whole other thing. Every single service I look at feels like it's been developed two or three times, and renamed at least once...with varying degrees of backwards compatibility/interoperability/deprecation. The security interactions must be a nightmare to properly test for MS. On top of this it feels like there's multiple strategies being enacted simultaneously which makes it hard as an admin user to know you're doing the right thing in general.
Other cloud providers undoubtedly have some of the same issues with complexity/change but AzureAD (now renamed to Entra ID for whatever reason) is being used to manage core things like user accounts and mailboxes for organisations of all shapes and sizes (often without internal security teams).
There's also an unpleasant feeling that security is an upsell opportunity. It's a bad look.
Windows 11 has also been plagued by bugs in the Microsoft Defender UI until recently, and it has been warning users about security risks even when they block Microsoft telemetry. This shows that Microsoft is not taking security seriously.
It's hard to even describe how historically bad they've been; they normalized and encouraged unsigned binaries, essentially viruses and worms, and were NEVER punished.
What should the punishment for Arch, Debian and Fedora be, then? Not only are unsigned binaries normal there, but as far as I know, the infrastructure doesn't even exist to check the signature of a binary on execution.
The repositories support signing so binaries that are sourced from these repositories are verified by the associated signatures during installation. If you trust the installation, would you include the execution in that same trust? Windows does not include signed repositories (or any that I know of) by default. (I could be wrong about that but if it does exist, the software collection is nowhere as vast as the distro repositories.)
Windows does not generally have the concept of a 'repository' (WinGet excluded, but that's fairly new). The Microsoft Store is the closest thing to a repo, and those are signed.
Everything coming from Windows Update is also signed by a public CA.
A runtime signature check is far more secure than an install-time signature check. On Linux you can swap a binary with an evil version using one of a million local privilege escalations available and nobody would ever know unless you have additional tripwire-style tooling set up.
None, but the reasons are more legal than technical.
None of those projects have come close to Microsoft in terms of creating a product, for pay, that average people reasonably rely on.
Again, using a legal definition of reasonably; a widely used and paid for product can be held to something like a "merchantability" standard - especially if Microsoft has ever claimed their product to be safe and secure, which I'm fairly certain they have.
i was under the impression that they had made massive investments and really turned that around ... or is that ironically only windows? (or just a mistaken impression?)
microsoft has always had a culture of "just make it work yesterday" that led to shoddy code. they spent a lot of money, some of it on PR and some of it on real work, polishing up some of windows's more egregious problems a couple decades ago. but the attitude is deeply ingrained in the leadership. i also suspect that a lot of cost-cutting demanded by the business side gets implemented by off-shoring work to the cheapest possible workers with the minimum experience necessary to push a product out the door, but im not as familiar with microsoft these days.
There was the famous "Bill Gates memo" early in the century that stopped work on pretty much everything except security, citing Windows' atrocious reputation as an existential risk to Microsoft. It lead to a big improvement, including UAC and driver signing.
Also it's worth differentiating between Azure and the rest of the Microsoft silos. The Xbox isn't being cracked on the regular. The microvirt for applications and browsers in desktop Windows seems very well thought out. It seems as though the Azure team are pretty awful, though.
Our company signed a huge 5-year deal with Microsoft for Azure services just today. We evaluated all the major providers for more than a year; Microsoft offered the best terms by far, and we were able to meet all of our needs with it just as well as with AWS or GCP.
Honest question: why was the decision wrong? It does not matter to me either way, I'm just an engineer who was brought in to consult on some aspects of the contract.
I'm extremely interested to know what terms AWS or GCP could not provide you that Azure could. That would be an extremely enlightening post.
As someone who has used Azure, AWS, and GCP since their inception but mainly Azure for very unfortunate career choices -- I can assure you Azure has the most problems of any sort you can imagine.
Your MS rep (if you're big enough to have contact with one) will most assuredly be using their checkbook for you guys or your client. I'm not being melodramatic. This isn't even an original opinion or experience (go ahead, look around HN). And it certainly isn't some cultural "M$" backlash as another poster put it.
If this post seems editorial it's because after nearly a decade of this nonsense I'm just simply exasperated, and it greatly confuses me when people who are within the MS ecosystem don't know how bad they have it with Azure. It's basically unacceptable that MS gets away with it.
From AKS to WAF to Cosmos DB take your pick. You will run into issues and you will run into them continually.
Personally with my experience, Azure is not something I could ethically ever recommend to anyone.
It's pretty much best of the bad choice when it comes to cloud providers.
One of the fun things we had in our MS experience was writing code that should have worked according to the docs but returned nonsensical errors (undocumented, generic kinds of error) from their API, only for it to magically start working the next day...
Even the basic stuff appears to be shoddy in places. For example, I received the notification for a meeting I was added to five days ago about 3h before the meeting, when the meeting author edited the meeting to add the next person.
Also, trying to find what made an e-mail land in spam in cloud Exchange is nearly impossible... and whitelisting senders in global rules just doesn't fucking work, for no good reason. It's an utter mess.
How is that possible? I'm guessing you mean the best terms for Windows machines and Windows-based services. As soon as you exit the Microsoft realm, the prices plummet...
You can solve business problems within the Microsoft realm at commercially reasonable costs. That’s all that matters to the business. Easy to get support, easy to find humans to run it. Like McDonald’s, Microsoft sells consistency. Is an alternate solution cheaper? Would have to compare total cost of ownership to come to a firm conclusion.
Because some people prefer to bash things with open source rocks instead of using a proprietary hammer.
There’s an entire subculture of the IT world that spells it “Micro$oft” and refuses to acknowledge its very existence, or a valid option.
I once saw a post about how some Linux tool had support for "every major LDAP directory system" and did not list Active Directory! It's like... dude... something like 99% of deployed LDAP systems are AD. The rest is a rounding error.
This is an uncharitable take which presents both a strawman and a false dichotomy. The reasons for avoiding Microsoft are many. And often philosophical in nature, whether with respect to business practices or the sanity of one's work environment.
You can choose to avoid Microsoft, that's perfectly fine. I do however find it amusing to see the very existence of Microsoft casually skipped over, as if they weren't one of the world's biggest corporations.
Imagine if someone listed "modern UNIX-like operating systems" and the list went something like: NetBSD, OpenBSD, AIX, OpenSolaris, and then went through dozens like that into ever more obscure things nobody has ever heard of, but skipped Linux like it didn't even exist. Just some Finnish guy's hobby project, not really worth discussing, right?
It gets to the point where it's absurd.
As a real example, someone made a printable SVG/PDF poster of "big data" and "data science" companies and technologies. They were listing dinky little startups that had a total value smaller than the annual cost of an individual Azure storage blob container that I deleted to save money. That was an "oops" by someone that the customer didn't even notice.
You're still strawmanning by lumping together a huge amount of people with different motivations under one homogeneous group.
> It gets to the point where it's absurd.
All of the examples you provide don't paint some cohesive picture, they come across as random, haphazard strokes with no form or meaning. Your original comment attempted to answer a question by ridiculing an entire demographic of people who care much more about human rights than you realize.
Okay, let me paint you a picture: I will never go as a tourist to certain countries. I don't want to support their economies with my dollars because I think they're morally bankrupt places. I also value my own safety, freedom, and the like. Think Russia, Saudi Arabia, North Korea, etc...
That doesn't mean I'll just ignore their existence or forget to list them as places in the world.
It's a uniquely Linux-fanboy thing to just pretend Microsoft doesn't exist, or that it's not even worth including in a list.
No, your comment is still coming across as close-minded and judgemental, and you are likely engaging in sharpshooter's fallacy as well by examining a few cherry-picked examples to support this weird crusade you seem to have against "Linux-fanboys".
Overall, this take feels immature and insensitive to the ideals that drive Linux users.
One gains a lot of benefits by building with libre or permissive software. Proprietary software comes with licensing fees and sometimes complex DRM systems that need maintenance. There are totally situations where either is the better tool for the job, though generally the proprietary "solutions" are often just short-term "let's get this done now" stuff that later carries an increased cost. Leaning on a proprietary solution also means you're downstream of any decision the makers of the tool make, including choosing to nuke your (and/or your business's) use case.
I can't comment on the LDAP thing on a technical level, but if the focus is on libre software, why is it apropos to list proprietary things that it's compatible with? Software that boasts this compatibility doesn't always keep it, or sometimes loses it due to deliberate action from the proprietary party. I don't think I'd list Active Directory either. It's not something one can vouch for unless they're literally Microsoft.
Unless your business is in the line of user authentication, what business advantage would one get by developing their own LDAP server? It would be an incredibly inefficient use of resources if you're a 100 person shop only wanting unified logins.
No, it was in response to the correct comment but I should elaborate.
> Leaning on a proprietary solution also means you're downstream of any decision the makers of the tool make, including choosing to nuke your (and/or your business's) use case.
My understanding is that OP is advocating that free/libre software protects a business from changes in software that could affect the business's use case.
This risk is present in both proprietary and free/libre software. Maintainers may remove features that some users find critical. If a feature is removed and your business requires the feature, you now need to maintain a fork of the software or contribute development time upstream.
You're right, by trusting any outside source for a dependency, you're still dependent. But as you noted, there's the option of adopting the software yourself and injecting it as vendored or something similar. Proprietary doesn't even have that.
Depending on the project, it may move slow enough for the dependency to be slow to update and it be fine. Depends on what you're doing.
At least in the case you mentioned, I can revert to an older version and freeze it there until my business can sort out the way forward.
Ideally, one chooses dependencies that are easy to replace and reasonable to maintain for a bit, if needed. Or if you're lucky, no dependencies at all!
I think the subculture you're referring to existed in the 1990s in places like /., but I never encountered it during the 2010s or 2020s.
(I think it is directly related to MS losing the stranglehold it had over much of daily computing in the 1990s. Apple, Linux, and mobile platforms forced it to compete and innovate more seriously, winning back quite some respect.)
Microsoft products are often criticized for being expensive to use on public clouds other than Azure. In terms of "bundling", Microsoft has not changed its ways.
I was having a Twitter argument the other day with someone who maintained that they never really come across any Microsoft systems in their work, and who would be crazy enough to use Windows Server? I've worked in orgs from 100 people to 10,000, and it's always Active Directory and MS all the way through, with an odd few Ubuntu or RHEL servers for internal applications.
The only Windows systems I've seen in over 15 years were to run Active Directory over 10 years ago (and not since), my personal gaming desktop (which no longer runs Windows), and GitHub runners for cross-compilation.
There's plenty of Microsoft out there, but there's an entire, thriving universe where Microsoft is completely irrelevant.
In a corporate setting, I've experienced many problems with Microsoft and their services.
Just off the top of my head: we had an MS Form (a couple, actually) that, as soon as it hit 50k entries, broke in weird ways. Not accepting new entries, dropping entries, etc.
That's just one example I immediately recall where I could code up a better solution in a short amount of time. That just shouldn't be a thing with a large cloud/service provider like MS.
We're a small B2B company. For VMs, AWS doesn't have a data center in our country (Norway) which is important due to data storage regulations, nobody sane picks Google for anything they need to rely on... so it's Azure or some local players.
We've tried the latter, twice, they got bought and support quality tanked both times. We don't want to manage hardware yet, so...
Also, customers want MSSQL, and managed MSSQL is nice since it's trivial to scale up and down with the needs of the customer.
Started using their other services like Service Bus, Blob Storage and Application Insights. We're using .Net so integration is quite simple.
It wasn't my call, but there weren't that many great alternatives from what I can see.
I worked at a place that used Azure and was told: "When Microsoft has a problem, customers understand, it's like a natural disaster, can't be avoided. When AWS has a problem customers say 'Huh, what's AWS, fix it now.'"
I can't remember any outages in the few years I worked there though.
There are plenty of companies that effectively cannot use AWS because it is owned by Amazon, and either they or one of their customers is in direct competition with Amazon. For example, if you are a vendor for Walmart, they do not want their data ever landing on AWS.
Musk was fired for digging in and spending months insisting a working MVP written in Java on Solaris be rewritten in Classic ASP on Windows because he knew the latter and not the former, not because of any intrinsic failing of the latter.
I can tell you why places where I've previously taken consulting contracts used it:
1. It's cheaper
2. They're already a C# shop so it just "feels right"
That's about it, really. That appears to be the sum of the thought that went into it. And that also explains why they had to bring me (and others) in as very expensive consultants to un-fuck various systems...
We found a bug in azure today where our security key was too big and their auth stack failed. They will try to push a fix in two weeks. They're doing their best.
Sadly for a lot of these companies it's more of a "business" decision than a technical one. Having said that I will not work for a company in the future that has their primary infrastructure in Azure.
Large enterprises using MS Office suite eventually rolled into O365. Sharepoint, same. MS cloud offerings were simply evolutionary. Enterprises simply had to shift from capex to opex. This is why they stayed.
Amit Yoran, right again. He was once head of computer security for Homeland Security, and became unpopular for pointing out that Microsoft was the problem.
Honestly, no one really cares...people will just continue to consume their products as usual.
So long as Microsoft has something interesting to offer consumers and business, security will be the last thing people care about.
Microsoft has had absolutely terrible security since I've had a computer and it's been heavily criticized the whole time. None of this has stopped their meteoric rise to extreme profitability.
I've realized this about security as well. Unless insurance companies demand higher premiums for choosing Azure, or Azure fails to get certain certifications, then all is well as far as decision-makers are concerned. And this is not even that irrational. Most security issues go unexploited for most companies. Caring about security is IMO a bad business move on average.
After the last CircleCI breach, we were all really annoyed about the way it was handled; IMO it was dishonest and lacked the required transparency for such a breach. We were all ready to move on, but we've just plugged on using the product like nothing happened.
GitHub was similar: they published their freaking private keys accidentally, which should raise some major red flags, yet we're all just going about our business with GitHub.
I was downvoted hard and fast for my original comment, but I'd like to see someone actually disprove my point. I've been in this game long enough to know that almost no security issue matters in the eyes of consumers so long as the company offers decent products and has a great marketing team.
I'm ideologically opposed to this inaction, and I think you're right: you'd need some type of financial disincentive to change people...
Social media itself is a type of privacy breach, and people actually openly engage with it.
Isn't the issue that, if you switch away from a service after an incident, it doesn't necessarily mean that the alternative is going to be better? GitHub clearly did things poorly, but is GitLab or Bitbucket better? Or have they just not happened to have messed up publicly yet/recently? Is swapping away just playing musical chairs at high cost to your dev team, constantly migrating until you've run out of services to migrate to?
I don't say this to imply that everyone is equally bad, but to question whether it is as obviously illogical as you seem to think to stick with a company that's had a major security debacle. What certification can you get from another CI vendor about their security procedures that CircleCI wouldn't have given you 2 years ago, or that your internal Jenkins team wouldn't have offered?
This is sort of what I mean. The fact that one vendor is bad at security has no particular bearing on if their competition is better, or if there's a more secure option you could run internally.
Sure, we've learned that Azure is less secure than we may have thought last week, but is AWS or GCP better? Or have they just not been uncovered for their issues yet? Or maybe they had their big security issue last year. Once everyone's had a big problem, what do you do? Make up some scoring system and keep re-migrating your company to whoever currently has the lowest "security mistakes" score?
Take a lesson from the Lock Picking Lawyer. You don't need three impervious locks, two AirTags, and a Klaxon alarm on your bicycle, you only need a stronger lock than everyone else parked next to you.
And when a bear attacks you and your buddy, you don't need to outrun the bear, you just need to outrun your buddy.
On second thought, I'm not sure any of this philosophically applies to Cybersecurity, given the low cost of entry, stealth and anonymity, and the ability to mount massively parallel, unattended attacks.
I think the bigger difference is between targets: the US government is going to be targeted by a ton of folks who don't care about me or you or most folks on HN, even if US gov security is tougher than a random individual's.
Something that bothers me a bit more about this: I feel like I have not really heard about many of these issues, despite thinking that I have been doing a decent job of staying on top of things like this by paying attention to Hacker News. Do Azure issues just not filter up as much as I would expect, or did I just happen to miss them?
It isn't like Azure is just ignored in the industry; time and time again I see it billed as the "non-Amazon AWS" for companies that don't want to support AWS due to Amazon.
I feel like I hear more about GCloud issues than I do Azure issues which is concerning given how little GCloud is used even compared to Azure.
The story of the stolen signing keys did have trouble picking up the traction it deserved early on, but eventually got multiple highly upvoted submissions:
GCloud is probably used MORE than Azure. Microsoft has a lot of non-cloud stuff in their revenue that makes it look bigger. Leaked numbers show GCloud revenue is actually about even with Azure.
This doesn't come close to the actual market share you'll find in the likes of Gartner or others. The cloud landscape has long been AWS, Azure, and GCloud as a distant third.
Yes, and it amounts to a puff piece for GCloud. "We don't think we're as far behind as everyone else does" is not a compelling argument. Google is only too happy to believe Gartner rankings when it's in their favor [1].
> The senator went on to pin blame on Microsoft for the recent mass breach of the Departments of State and Commerce and the other Azure customers. Specific failings, Wyden said, included Microsoft having “a single skeleton key that, when inevitably stolen, could be used to forge access to different customers’ private communications.”
> "Microsoft’s engineers should never have deployed systems that violated such basic cybersecurity principles"
I hope someone quotes this line back when they inevitably introduce "we want a backdoor to encryption" legislation.
Many years ago in 2003(?) I interviewed for the role of Security Architect for the Office group (the guys who made Excel, Word, PowerPoint, and (I think) Access). My resume was reasonably good: I had spent the early 90s doing cryptography at Nortel, RSA (back when they were an engineering company), and Certicom. But I was a software engineer, and in the late 90s I was doing Security Architecture for IBM and "several government agencies you might have heard of." I was REALLY ready to leave DHS at the time, so I was kind of motivated to bring what I had learned about writing secure software to industry.
About this time there was a "security stand-down" going down at MSFT, in part because several federal customers LITERALLY had to solicit an ACT OF CONGRESS in order to continue to use Win2K (or an early version of XP) with all its known security flaws. Do not ask me about the version of Win2K in nuclear submarines. (Really. Don't ask me. That was someone else's project. I really don't know anything about it other than the rumors that were swirling around BlackHat.)
So here I am, coming in as some guy who's hip to secure software development and tools and how to convince devs to do the right thing re: security even though they're under a deadline. My third interview of the day was this guy who supposedly wrote Excel and was the "third highest ranking coder in all of MSFT" (not Simonyi, I would have recognized him.) And his first question was "So... how's your QA skills?" This isn't what I'm thinking I'm interviewing for, so I say "Pardon?" and he replies...
"This security thing is bullshit. Bill's going to eventually realize it's bullshit and in a couple months we'll go back to writing software the same way we used to. So I'm going to have to find a job for you and I'm thinking QA; that's the same thing as security."
I did not get that job.
I believe Michael Howard or Dave Leblanc got it. They went on to write a pretty decent book about secure product development, and if you're a Microsoft shop and have heard of the Secure Development Lifecycle, it's largely because of Michael and Dave.
(Don't worry, I was fine. I went on to work at Handspring and PalmSource and a bunch of enterprisey dev shops that were hip to the idea of developing secure code. And my life was probably filled with fewer headaches than anyone at MSFT.)
But... I remembered that interaction. Microsoft keeps saying "oh yeah! we're big on security!" And in many ways they are. MSVC (or DevStudio or .NET whizbang or whatever they call it now) has several very cool fuzzing and analysis tools. I've heard the Azure group is better about security than they were, though that's rather a low bar. I feel for them since they have a metric boat-load of legacy code and a development methodology that sort of guarantees failure.
They are also the strangest and most conceited group of developers I've met (with the possible exception of Amazon or Facebook or Netflix.) Come to think of it... what the heck is it about these FAANG companies? I bet I'm just meeting the duds. There have GOT to be decent developers in there somewhere.
They're all HUGE dev organizations and I appreciate how difficult it is to get that many developers pointing in the same direction at the same time. But at the end of the day, MSFT has a culture that really doesn't care about security. Or at least that's my take on it. I'm sure there are plenty of places in Redmond where people care about writing code that isn't buggy or vulnerable. But it's 20 years later and it still hasn't spread far enough.
> Come to think of it... what the heck is it about these FAANG companies?
Mostly: hiring people for perceived "talent" over actual engineering skills (especially engineering soft skills.)
Imagine interviewers highlighting someone's CV because they've won Putnam math competitions, while round-filing some other CVs because "they write Haskell/Erlang/Rust/etc on some hobby project, and so they might try to push for it at work and then burn out when they find out we won't do it here."
Now imagine the people hired by a process like that, going on to hire other people, and so on.
> I bet I'm just meeting the duds.
That too. In a big org, nobody with "real shit to do" is going to spend time interviewing. They're going to push that off on someone whose time can be wasted. Which means company culture gets decided by a bunch of people whose time isn't worth anything...
Amazon has credit cards; they are liable for billions if that system is broken into. Facebook and Google don't have that, but they are still directly the target of hackers as part of their core business. Excel gets hacked, but the victim is not Microsoft.
You think having consumer credit card numbers in the database is somehow a big enough motivation to keep better security. You may want to consider the overwhelming evidence of various consumer-facing companies - Target, Home Depot, Heartland Payment Systems, etc. - who had gross and basic breaches of credit card numbers.
They don't issue credit cards, to my knowledge anyway. Given the different banking laws around the world, I doubt they ever will; easier to let a bank deal with that.
Listening to the cybersecurity person interviewed here play down the significance, one is left to believe cybersecurity folks rely on Microsoft to keep them employed.
According to this podcast, the only reason the government discovered this breach is because they were paying Microsoft for the "privilege" to see who was accessing their email. Most customers were not paying thus would never have discovered similar unwanted access.
If charging for this transparency is a "business model", as the podcast suggests, and there were only a relatively small number of "customers", it really makes one wonder how much money they were making from this "business model".
Wonder how this headline and article jibe with Azure achieving FedRAMP statuses, DoD IL5, and other security certifications?
If Azure is as bad as this article makes them sound, does that mean most major security certifications are also as pointless as they look from the outside? Like the pointless ISO 9001 certification, which only states "we have a process; here's the process; we follow the process; we don't deviate from the process"?
As someone who's worked on obtaining Common Criteria and FIPS certifications, I fully believe security certifications have little to do with security. Especially if you don't carefully read the definition of Security Target therein; that part is very much like the arbitrary process in your ISO 9001 mention.
How much of Azure infrastructure is managed by Windows? Are they using their own operating systems, hardware devices, drivers, firmware blobs and tools, or are they struggling like everyone else to glue all that FOSS together securely?
I can't talk about what I saw while working there, but let's just say that I have a nervous tic when someone tells me "we have a team for security, so you need not worry" and "when can you have it done by?"
Yes. If your boss tells you that security isn't your concern as an IC, because other (presumably more specialized) minds are going to be reviewing everything later anyway, try to get transferred.
It's not malicious, but the effect is sort of like juggling one-handed while blindfolded, throwing things in the air and presuming that a right hand (a) exists and (b) is not currently holding several other balls.
Everyone was super lovely and awesome and smart, but there is (for whatever reason) an attitude that security is one of those things that is handled/reviewed by a special team, like accessibility, or HR, or product design, or backups.
The thing is, we did have a great security team, it's just that expecting them to have local knowledge of every single line of code is obviously unfair to them, and the idea that you can "do security" on any product just by applying a fixed set of rules somewhere way down the pipeline is to grossly misunderstand how vulns happen and work.
Every single IC can and must be on the lookout for vulns. They're vulns. They are literally made out of eyeballs pointed elsewhere.
Security is like hygiene: everyone has to wash their own hands after they go; it makes no sense to think that everyone can save time by having a Department of Hand Sanitization.
That’s unfortunate. When I was on Windows Phone in the late 2000s, every new feature got a round of threat modeling. Each team (PM/test/dev) was responsible for building the threat model themselves, but the threat model was reviewed by another security team.
What the hell happened? Is annual security training still a thing?
There's still annual security training, but it's tapas-model. So basically you have to take N courses by <date>. If you're interested in lockpicking, but your job is in the cloud, you could still meet the requirement by watching a video about lockpicking. (If such a video was offered.) No one follows up, except to ensure that you took enough courses by the given date.
Note: My time at this company overlapped heavily with Covid WFH, so things may have been looser/weirder than normal.
And regarding threat modelling: I literally have no idea; I was just an IC. All I know is, there were several occasions where delivery cadence was selected for, and in my opinion, on at least some of those occasions, security should have been selected for instead.
One charitable explanation is that they'd just threat-modelled my part of the product and decided there weren't any threats there, so there was no need to ease up on the gas pedal. But some threats are subtle and lurk where you least expect them.
I'm not trusting Azure with shit.