Faster Chips Are Leaving Programmers in Their Dust (nytimes.com)
10 points by pixcavator on Dec 17, 2007 | 26 comments


OK, so maybe I'm missing this whole multi-core dilemma, but I don't see that big of a problem. I just think that people are asking the wrong question. The question isn't "how can I write software for multi-core chips?" It should be "what problems are solved best with multiple cores?"

There are categories of "embarrassingly parallel" problems that have been solved for years using multiple cores: video rendering, 3D graphics rendering, etc... In short, anything that does repetitive processing on large datasets.

Now, people are upset because we're not going to be able to improve software for the average user with multiple cores, and they're correct if what they want is for word processing and email to get better. The question we need to be asking is this: how can embarrassingly parallel problems make user software better?

How can we use multiple streams of video in software? How can 3D rendering improve my software? Or, what sort of very large datasets can I process with multiple cores on the desktop?

The companies that answer those questions (e.g., GOOG) are the companies that will make a lot of money in the next decade.


I agree.

At least half of the "dilemma" is pure, unadulterated marketing. In this article, the marketer is Microsoft, which is trying to convince end users that there's some kind of big problem, one which no mere mortal can comprehend, that is somehow preventing our expensive new Vista machines from being any better than the XP boxes they replaced. It certainly has nothing to do with Microsoft's incompetence, nor with their slavish devotion to Hollywood-approved, mind-mangling, box-breaking DRM. And it's certainly not their insistence on foisting incompatible proprietary crap like IE on the industry. No, it must be a Fundamental Problem of Computer Science that is holding us back. Naturally, this problem can only be solved by the big academic brains that work for... Microsoft!

The email example is a dead giveaway... it's laugh-out-loud funny:

"In the future, Mr. Mundie said, parallel software will take on tasks that make the computer increasingly act as an intelligent personal assistant."

Ah, the intelligent personal assistant -- it's the application of the future, and it always will be.

How do we know that intelligent email processing is not being held up by the lack of suitable coding techniques for eight-way parallel processors? Because I have a dual-core processor right now, and it spends the night contemplating its digital navel and counting to 2^64 by fives for fun. If there was something smart it could be doing with my email, why isn't it working on it right now? I am drowning in unused processor cycles.


+1 Insightful

The Microsoft personal assistant example is just cracking me up. Firstly, Microsoft has been selling the whole 'automated assistant' thing to us for YEARS. Clippy is just one hideous example.

Secondly, automated inbox processing has been around for _years_. My POPFile open source program is now over 5 years old, and there are older examples than that (I was doing automated email sorting in the late 90s, and others did it before me). So multi-core machines are what's holding this back? What a joke.

Thirdly, it mentions features (looking at who I correspond with) that are already available (see Xobni and others). And automatic response systems are also around to deal with customer service.

Sorry for the rant, but this is two pages of fluff about multi-core processors plus some breathless speculation about email processing that has been available for a long time.

How about talking about something interesting, like parallel-aware languages (occam, Erlang, ...)?


The Vista DRM complaints are ridiculous. For one, it's ironic that everyone I know who bitches about it owns an iPod, which is basically the reason we're in this DRM quagmire to begin with. Nobody ever complains about Apple's DRM policies, and if anyone should be telling the studios "we won't sell your products until you ditch DRM," it's Apple. They have game-changing power there; MS does not, since they're still an also-ran in the media sales business.

For another, I've never had a problem with any DRM on Vista, because I just don't have DRM'ed files and, like 99.9% of America, don't have an HD DVD or Blu-ray drive. It's certainly not box-breaking by any sane definition.

And lastly, but most importantly, Microsoft (and any other consumer OS maker) has no choice. See http://arstechnica.com/articles/paedia/hardware/hdcp-vista.a... for details. We may all hate DRM, but it isn't Microsoft who is at fault. Not that any of that is at all relevant to the NYT article.


Nice article, but what a strange title. Faster chips are leaving programmers in their dust? Huh? I wasn't aware that I was competing with CPU speed. All this time, I thought more processing speed expanded my possibilities. After all, my entire field (mathematical optimization) would be pretty useless without lots and lots of processing power.

I talked to a dude with an OR background, and he said that in the 80s people laughed at the notion of using this kind of math to solve business problems. Processing power increases the opportunities for programmers, without question.

Anyway, fun to read, just a very strange take on the relationship between programmers and cpu speed.


The point is that more processor speed expands your possibilities only if you are smart enough to actually use it. Lazy (dare I say "blub") programmers are being left in the dust.

(Of course, this isn't the first time some programmers have resisted change. I remember a great rant about how if you program in assembly and never hit in the cache, the Pentium 4 was no faster than the Pentium III and thus useless. That guy probably quit programming entirely when multicore arrived.)


Well, have you written an automated email your-life-understander and automatic-reply-drafter to use people's second core?

Well there you go, left behind.


Maybe those extra cores can be used to do a better job of remembering, learning, and anticipating what users will do. This could pre-load or unload applications or components based on historical usage, rearrange toolbars to have the most commonly used commands, and in general make the kinds of improvements to computing in general that the squiggly underlines did for spell checking last decade. Right now computers are largely deterministic and users have to learn how the computer works. That's why users hate computers. Some of that pain could be taken away. I think this will benefit OSes, thick clients, and installable programs more than the web.


Magic toolbar rearrangement is a world of pain. That's like having my SO notice that I always look for my keys before I go out the door, so she moves them to the table near the front door, when I habitually put them in the vide poche near my desk.


I'm not saying that's what I would want, but have you ever watched a non-techie use the computer? Just looked over their shoulder and watched them? It's horrifying! I can only watch my wife for about 2 minutes before I start belting out things like "You don't need to double click links on the web!" or "If you double click on an app, you have to give it a couple seconds to load or else you're making the problem worse!"

What a smart OS would do is watch you for a while, notice what you do, and then make a decision based on the statistics: let's say 85% of the time your keys are in the vide poche and you find them immediately, and the other 15% of the time they're somewhere else and it takes you 5 minutes to find them. Then it would warp them to the right place (the place your actions showed was right) with a note and an option to "always do this". I shudder to think of MSFT's implementation of this (Clippy), but if a smart company along the lines of Humanized did it, and used statistics instead of some clever algorithm (didn't we just read about Google doing that?), it could be a godsend for average users.
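The counting part, at least, is nearly trivial. Here's a sketch of the idea (the module and function names are invented purely for illustration; the hard part is everything around it, not this):

    -module(habits).
    -export([suggest/2]).

    %% Tally how often each past action occurred; if one action accounts
    %% for at least Threshold of the history, offer it as the default.
    suggest(History, Threshold) ->
        Counts = lists:foldl(fun(Action, D) -> orddict:update_counter(Action, 1, D) end,
                             orddict:new(), History),
        [{Best, N} | _] = lists:reverse(lists:keysort(2, orddict:to_list(Counts))),
        case N / length(History) >= Threshold of
            true  -> {suggest, Best};
            false -> no_suggestion
        end.

Feed it the observed behavior and a threshold like 0.85, and the "always do this" prompt only fires once one choice clearly dominates.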


Toolbar rearrangement was a bad example. Try this: I wish like nothing else that Windows Mobile 5 would realize that when I press Text Messages with no new messages, it should go straight to "Compose Message" and rotate the screen to landscape, and if there are new messages, it should go to the oldest new one. I've done that for 98% of the times I've opened Text Messages, and I'd imagine most usage of most apps by most users fits into a few similar buckets.

Don't underestimate the value of saving every user 5 seconds each time they use your app.


"Indeed, a leading computer scientist has warned that an easy solution to programming chips with dozens of processors has not yet been discovered."

Weird, I thought it was called Erlang.


Erlang is not easy. At least, not for me, a mere mortal.


I bet that's just because you're not used to it. Neither am I, but I get a strong feeling that the concepts that Erlang lets you program in terms of are extremely powerful for writing efficient, correct code for multi-CPU systems.


OK, let me elaborate: it is "easier" to do parallel programming in Erlang than in other languages.
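For anyone wondering what "easier" looks like in practice, here's a toy parallel map. Just a sketch, with the module and function names made up for illustration:

    -module(par).
    -export([pmap/2]).

    %% Toy parallel map: spawn one worker process per list element,
    %% have each worker send its result back tagged with its pid,
    %% then collect the results in the original order.
    pmap(F, List) ->
        Parent = self(),
        Pids = [spawn(fun() -> Parent ! {self(), F(X)} end) || X <- List],
        [receive {Pid, Result} -> Result end || Pid <- Pids].

Call it as par:pmap(fun(X) -> X * X end, lists:seq(1, 1000)) and the runtime spreads those thousand little processes over whatever cores you have - no locks, no thread pool to manage.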


I know this. I watched a video on this whole argument not long ago...link is floating around on YC somewheres.


well then why complain?


PR.


I don't know why you got modded down - this is a submarine PR article for MS...


How do I tell if it was modded down?

Maybe I should explain my terse comment. But first, I agree that it's a submarine PR piece for Microsoft. How many software companies are cited in that article? How many technologies are mentioned and/or explored? In short it says "Multi-core processors have potential, and [only] Microsoft has the best minds trying to exploit that potential."

Parallelism has been available in less local forms for decades: supercomputers, clusters, and more recently grid computing. These aren't new problems for computer science; they are only new problems for desktop software developers who have always assumed a simpler processor architecture.

So imagine a program that could identify parallelizable sequences of instructions in other programs, and split the program accordingly. Such a program may be able to exist, but it appears to be a very hard problem (it's even a hard problem for humans). Vista is definitely NOT this program.


"How do I tell if it was modded down?"

Your first comment had -1 points on it.


The article seems to be a retread of the same thing that's been printed every day for 20 years. "New processors are going to make all these new things possible." And yet there's still no computer holding a reasonably intelligent conversation with you if you call Dell customer support. Actually, there isn't even a human doing that.


Surely if MS could make it so threads were processed by spare cores automatically, then this would make life easier.

Servers could use the cores to process more connections and lessen the need for more servers. I guess virtualization could be used.


"...we at least have a nucleus of people who kind of know what's possible and what isn't..."

A surefire recipe for missing the boat.

Honestly, which would you rather have, more power in your hand or a faster pipe to the rest of the world?


If just a small portion of an application is non-parallelizable, then a million cores won't help you.


I think it's time for a refresher on Amdahl's Law.

http://en.wikipedia.org/wiki/Amdahl's_law

The non-parallel portion limits the available speedup.
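To put rough numbers on it (the formula is Amdahl's; the module name below is just for illustration):

    -module(amdahl).
    -export([speedup/2]).

    %% Amdahl's law: P is the fraction of the work that can be
    %% parallelized, N is the number of cores.
    speedup(P, N) -> 1 / ((1 - P) + P / N).

amdahl:speedup(0.95, 1000000) comes out just under 20: with only 5% of the work stuck being serial, a million cores can never buy you more than a 20x speedup.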




