Anecdote: Out of every bank and financial institution I have ever tried hacking (ethically, as part of bug bounty programs), Goldman Sachs is hands down, without a doubt, the most secure externally. By a long shot. They have what basically amounts to a central authentication service that 95% of their public-facing IPs resolve to. Their subdomains are locked down, they have a reasonably good patch schedule, and they swiftly denylist your IP after you run even light scanners - it's not a joke. I challenge you to find a vulnerability - and when you do, get some money for it: https://hackerone.com/goldmansachs
I consulted at GS for a public web project, and their security team was not only smart but very well integrated into the dev process. A dedicated security team did routine code reviews, pen tests and the like. If they had specific requirements, like adding a captcha or barring IPs, they would put them in our backlog fully groomed and prioritized. They were very thorough, but not iron-fisted gatekeepers.
One of my friends got an internship doing dev there in like 1999/2000. They were already using 2FA (with a chunky but functional hardware dongle that had a small 8-segment display that updated every minute or two) to secure SSH access. Even with that, there were very tight limitations on what could be accessed at all over the network. I'm slightly impressed if I see an org that has a 2FA setup half that good now twenty years later (there are soooo many that don't).
I remember those dongles - and they weren't cheap. We had one for a sensitive part of the business (which probably was counter-productive because it got passed around like a potato).
Former GS-employee here: The relevance of "gs.com" to the business of Goldman Sachs is pretty much zero. When I worked there in 2010 I was told somewhat tongue-in-cheek-ishly that up until the 2008 financial crisis, the telephone number for press inquiries went to a PR firm that was under strict orders to never answer any inquiries of any kind and never to engage the press for any reason whatsoever. This was the extent of their PR activity. The extent of their marketing & advertising was even less than that. GS.COM is there to serve that non-function within the business.
If you do business with Goldman Sachs or if they are interested in doing business with you, you already have communication channels with them, and there's nothing that GS.COM could do to augment that in any way that is in any way meaningful.
I'd be willing to bet that even a complete takeover of the webhosting behind GS.COM wouldn't get an attacker an iota closer to taking over any systems that actually matter to them.
I remember what it was like to get approval there to make them open a port in a firewall so we could send e-mail from servers co-located at stock exchanges back to ourselves in the office. ...getting that signoff was a serious ordeal because the co-location networks operated by the stock exchanges are treated by them as "hostile", and they fought tooth and nail to keep their firewalls as tight as they could possibly be. Asking them to loosen their security on anything internet-facing by even just an iota is an ordeal I never ever in my professional career want to have to go through.
The danger isn't that gs.com would leak data or somehow someone would get access to GS systems through it - the danger is that someone could change content on gs.com and then use that content to spear-phish or otherwise scam someone else.
You get a link, you check it because you're good like that, see gs.com and then trust it.
...but the economics of such an attack are still very different from when the same thing happened to chase.com or td.com or some other bank that had consumer banking, because you can send phishing e-mails to the general public and get a pretty good hit rate of people who happen to be in business with that bank and very much used to conducting their business with that bank through that same domain.
For GS.COM that hit rate would be extremely small, unless you were somehow able to reach people who are in business with GS with very high accuracy, and then the next problem you'd face is that the trust that these people put in GS.COM doesn't extend to doing any kind of business there. If you're a Norwegian oil company and you have GS handle your USD forex forward contracts, then the way you do that kind of business is that you call somebody on the phone who works at GS, whose voice you recognize, who has likely invited you to their wedding at some point. You wouldn't expect them to send you an e-mail that put you into a web flow on GS.COM that somehow ended up with you wiring a few million to an account number they display on-screen.
I'm thinking not attacking GS's customers, but using GS's name to attack others. Imagine, if you will, a crypto scam that puts their scam page on a gs.com address, and makes it appear that Goldman Sachs supports and funds the scam, thereby encouraging people to send in their money.
But the thing is: GS doesn't have a reputation among the general public. 60% of people probably never even heard of them. Another 35% of people have come across them on the news somewhere and probably have negative associations with GS. GS is one of those companies that journalists just love to hate. (GS? Who is GS? ...oh yeah. Now I remember, I saw this thing on the telly. Something about secret world government and stealing food away from the mouths of starving children in Africa for personal gain, I recall). And the other 5% are likely people too smart to fall for bitcoin scams.
[A television is showing Blondie as a news anchor. The inset picture of the news shows the logo of LulzSec, a man wearing a monocle and top hat.]
Blondie: Hackers briefly took down the website of the CIA yesterday...
[Ponytail, sitting in an armchair, is watching a television (seen from the side) standing on a table hearing what Blondie says as indicated with a zigzag line from the TV. Above the top part of the frame is a smaller frame with a label:]
What people hear:
Blondie (not shown from the TV): Someone hacked into the computers of the CIA!!
[Megan, sitting in an armchair, is watching a television (seen from the side) standing on a table hearing what Blondie says as indicated with a zigzag line from the TV. Above the top part of the frame is a smaller frame with a label:]
What computer experts hear:
Blondie (not shown from the TV): Someone tore down a poster hung up by the CIA!!
I bet some of my half-baked ideas from 2008 are still running robustly. So long as there are no known vulnerabilities and it's been maintained, I don't think the age of software really tells you much about it.
Serious question for fashionistas: which of the modern frameworks integrate 'properly' with Closure Compiler? Not just minification, but structured so the compilation, dead-code removal, constant replacement, inlining and obfuscation steps all work correctly.
I've seen a few with instructions to compile with Closure Compiler, but of the (approximately one) I've checked, the output was barely even minified, never mind optimized.
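For anyone wondering what integrating 'properly' means in practice: Closure's ADVANCED_OPTIMIZATIONS mode renames every dot-accessed property, so a framework has to be written (or covered by externs/@export annotations) such that each name is either consistently renameable or consistently quoted. A minimal sketch of the classic pitfall, with made-up names; uncompiled, the two accessors agree, and it's mixing them that breaks after advanced compilation:

```javascript
// Hypothetical illustration of the property-renaming hazard in Closure's
// ADVANCED_OPTIMIZATIONS mode. All names here are invented for the example.

const settings = { retryCount: 3 };

// Closure may rename `retryCount` here (e.g. to `s.a`) ...
function viaDot(s) {
  return s.retryCount;
}

// ... but quoted property names are never renamed, so after compilation
// this would look up the ORIGINAL name, which no longer exists on the
// renamed object. The same hazard applies to JSON parsed from a server.
function viaString(s) {
  return s['retryCount'];
}

// Uncompiled, both access styles agree. A framework that mixes them (or
// reflects over property names) silently breaks under advanced mode, which
// is why many only survive SIMPLE_OPTIMIZATIONS, i.e. plain minification.
```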
Well I'm sure React does too because Reagent uses it. And since it's a Clojurescript lib, it goes through Closure as part of compilation to JS.
https://github.com/reagent-project/reagent/
They've likely been using them since before they were acquired by Neustar, and therefore they probably have the on-premise edition, while new customers would be pushed towards the cloud version.
2008? Walk through the floors of Fortune 100 investment banks (or other Fortune 100 companies) and you will see systems running REXX jobs on z/OS systems. If you're lucky, the systems will be running AIX. I work at a Fortune 100 company and our migration from on-site IBM to the cloud happened just last year.
Just take a look at the job ads out there for investment banks -
I did an internship at a Fortune 100 shipping company and can confirm they were very much still using z/OS and REXX. It's similar to all the COBOL code: it's working, and not worth the risk/cost of a migration.
This is true of almost ALL businesses, even the technology ones. Unless it's literally them dogfooding their own product, they really have no incentive to do anything if it's not currently halting the business.
You can have fun digging in the source code of emails from various companies, or looking at PDF invoices, etc (check for lines of text at the bottom!).
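As a concrete sketch of that kind of digging: the raw source of a message carries headers (X-Mailer, User-Agent, generator tags) that often name surprisingly old software. A few lines of Node.js, with an entirely made-up message and header value:

```javascript
// Peek at raw e-mail headers for tell-tale software names.
// The message and the "LegacyMailer" product name below are invented.

const raw = [
  'From: billing@example-bank.com',
  'X-Mailer: LegacyMailer 2.3',
  'Subject: Your statement',
  '',
  'Body goes here.'
].join('\r\n');

// Return the value of a header from the raw RFC 5322 header block
// (the part of the message before the first blank line).
function getHeader(rawMessage, name) {
  const headerBlock = rawMessage.split('\r\n\r\n')[0];
  for (const line of headerBlock.split('\r\n')) {
    const idx = line.indexOf(':');
    if (idx !== -1 && line.slice(0, idx).toLowerCase() === name.toLowerCase()) {
      return line.slice(idx + 1).trim();
    }
  }
  return null;
}
```

(Real messages also fold long headers across lines, which this sketch ignores.)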
I prefer FORCOS. FORSIN is for noobs and always out of phase with the latest trends in JS. ;-)
Just before the advent of Gameboy Color, I ported a 50-year-old BWR/PWR nuclear reactor simulator containing 5MLoC from UNIX to Win32.
Around the same time, I made a JRE installer for a customer that ran from a CD-R (which seemed important, because it ran slowly) to install a DOS program containing a text database of structural building codes. I'm sure no one uses it now... or at least I really hope so.
If this is shocking, wait until you find out that a lot of banks still use IBM IMS. That's IBM's first database, which they built for the government to track rocket parts for the Apollo moon missions.
Tangential, but I watched a video about maintenance on the Russian high-speed train, Сапсан (Sapsan), and when they started the train for a test, the unmistakable sound of Windows XP booting was heard. It could have been from any number of systems on the train, but I found it funny. Admittedly it was a US documentary, and they really butchered the translation for dramatic effect, so I wouldn't be surprised if that was fabricated too.
My uncle's neighbor retired from a New York bank last year, after ~20 years as the head of VMS system administration. He managed a team of three whose sole job was to keep a small VAX cluster 100% available during business hours. I don't recall the specifics, but he said the ancient machine processed billions of dollars per day of transactions via FTP file transfers to some other institution. It was fascinating to know that technology and teams like that still exist.
Maybe because 20%+ of the staff were laid off due to the sub-prime issues in 2008, and the poor dude who used to maintain this is a name no one can remember.
Their proprietary securities database is called "SecDB", which is actually the internal name for the whole quant analytics platform; the database component itself is called a "SecServ". Its associated programming language is called "Slang". They are certainly very strange in many ways, but they do not predate the creation of C. In fact it was all written in C++ and Java, and then more recently I think they added a bunch of Scala code.
Source: I was a "strat" (quant developer in the front office at Goldman) for 8 years, including starting in the group that ran the version control repo, CI/CD pipeline and internal build and distribution tooling for all of this. Think of a code pipeline that builds circa 30 million lines of C++ and 20-ish million lines of Java daily on 3 different platforms (Linux, Solaris and Windows) and distributes it globally across thousands of machines every 2 weeks, where a mistake can cost millions of USD. And this predates most of what people think of as "devops" by some years (I was there 2002-2010 ish).
A couple of fun facts to think about:
1) In any given 2-week release cycle (we didn't call them sprints) you would have just a bit shy of 700 individual devs checking in C++ or Java code.
2) The source code repo was CVS.
I did the ports from pre-ANSI to ANSI C++ compilers on all three platforms (in each case with one other dev specific to the platform), and this was in the days when template instantiation compile errors could literally run 50+ pages. I got very good with vim and quickfix lists :-)
Maybe this is ridiculous, but is there an ironic security upside in that? Systems become so old and increasingly proprietary that only a few people can make sense of them, so fewer people would try to penetrate them? (I am not an engineer)
It's sort of curious, how exactly do we avoid the "if it ain't broke, don't fix it" mentality for companies that aren't super tech-focused?
Like, sure, maybe a company like Facebook or Google can fairly easily be up-to-date, they have a million engineers running around there, but a lot of companies decidedly aren't tech-focused.
I'm not sure it needs to be "fixed." Certain systems probably can be run for decades without meaningful updates as long as security vulnerabilities are properly addressed. For air-gapped systems you can potentially run with no updates/patches for many, many decades. The main issue is hardware support life-cycle.
The problem is that unless the software is actively worked on and supported by someone, fixing a decades-old vulnerability is going to be complicated when it does show up (and if you don't do regular pentesting, the main way it shows up is a malicious actor exploiting it, i.e. too late). The people who set it up are retired, dead, or don't remember; nobody uses that tech anymore, so community resources would be of limited use. For third-party software, unless there's a vendor still alive and actively maintaining your old version, you're pretty close to screwed. For first-party software, your only hope is that the people setting it up did a good job with documentation.
It's drastically easier to maintain and patch recent software. You don't need to have everything written in JS and running on Kubernetes for that.
1. It is far easier to fix the old bare-bones system that doesn't really break than the new shiny all-singing, all-dancing system. You don't ever update the code, so nothing new ever breaks.
2. There is a very old guy who does absolutely fuck all, is grumpy, and at the annual review department heads ask why we are paying him 3x anyone else in the department. He is the guy who keeps the legacy system alive.
3. The big project focuses on messaging between the legacy system and the new system, which breaks constantly and is unreliable anyway.
More than that, I have personally seen the careers of multiple senior managers, on 7/8-figure salaries, destroyed beyond repair over new IT projects. A $20M 2-year project becomes a 5-year $180M white elephant. You could seed a software company, develop a product and deploy it in these organisations faster and cheaper.
That's fair; upon thinking about it, basically every appliance I have (oven, microwave, etc) has some kind of computer in it, but they have no network access and I will probably have all of them for several decades.
The "if it ain't broke, don't fix it" mindset is only one part of the equation.
Most organizations acknowledge that it's generally preferable to run up-to-date, modern software. But at the same time, updating or, even worse, replacing a piece of technology can be extremely complex and also quite risky, so people naturally choose the path of least resistance and keep old pieces of software (or hardware) running.
And even Facebook or Google are likely not immune to that.
For example, Google used to run Red Hat 7.1 (released in 2001) for a very long time, only recently transitioning to a more modern Debian base.
This comes with huge caveats, of course, as Google has the resources and the people to basically maintain their own distribution, and practically did so, upgrading this RH 7.1 base with security and bug-fix patches and including newer components such as recent kernels. But overall, the base system Google used was quite old, almost to a scary extent to be honest.
Also, the upgrade process detailed in the slides is really interesting and shows how complex and involved it was for Google. For organizations less able to attract very good engineers, this kind of upgrade would either be extremely painful (bugs, downtime, etc.) or nearly impossible.
When I was there, Goldmans was very tech-focused. They’ve built a lot of interesting stuff internally, including SecDb which has been mentioned elsewhere on this thread.
They rarely talk about it externally though, and it’s all built to facilitate their primary businesses.