Extreme Scale Computing (irvingwb.com)
16 points by dlnovell on April 18, 2010 | 3 comments


A lot of random thoughts came to mind when the author referenced mainframes.

My father and I argue/discuss this sort of thing often. Mainly because we're from two different worlds. He comes from a Mainframe (z/TPF) background and usually laughs at what we think is high-performance or a heavy load on our web servers. (He also likes to ask when I'm going to learn a real programming language like assembler.)

Just last night I was asking him why web companies don't run on just one of these juggernaut mainframes. The price of the hardware seems to be the biggest hurdle. But there's also the fact that if you want 'feature x' in how you store data... well, you write it yourself so that it runs as fast as possible. You won't find VISA running distributed MySQL instances anytime soon. The mainframe mindset seems to be that you build the system EXACTLY how you need it. No More, No Less! The way I think of it is like Lego blocks.

Today we have companies like Twitter who think "We have a lot of different kinds of blocks; how can we make them fit together and do what we need?"

Mainframe developers think "We need to handle tens of thousands of transactions per second doing X. So let's make our own blocks that fit together perfectly and do exactly that."

This means you end up having to re-invent a lot of things. But you re-invent exactly what you need and leave out the rest. I wonder if the future is in a hybrid of these two types of systems. Perhaps we need to put Mainframes in the Cloud and let people purchase time on them like Windows Azure & AWS. I would certainly like to play with that kind of horsepower. Or perhaps the future is in something that doesn't exist yet which the author seems to suggest.


What you are suggesting was called Time-sharing (1960s). http://en.wikipedia.org/wiki/Time-sharing


Supercomputing is a moving target. Irving Wladawsky-Berger has given a nice summary of how supercomputers are scaling (we are beginning to see petaflop machines), what the challenges in scaling up to exascale computing will be, and the significant changes to predictive modeling that such machines will enable.

In case you have not noticed, supercomputing (at least at the teraflop level) has become mainstream. We are beginning to see monolithic supercomputers, that is, machines primarily dedicated to a particular application or sub-application, in a variety of production situations. Often these monolithic supercomputers are constructed of standard commodity servers; some use hardware accelerators to substantially improve throughput or real time performance by exploiting unrealized parallelism. Specialization by an accelerator can improve throughput by significant factors, often an order of magnitude or two.



