my comments are in bold
"if you had to plow a field what would you prefer, 2 strong oxen or 2,000 chickens"
The difficult thing is not the hardware; it's how to program it. Parallel systems have been the big thing for years, but programming them is still largely the province of highly specialized applications.
This is a big deal because multicore processors will demand a fundamental break in how we develop applications for them.
Moore's Law is still alive, at least through 2015, but it is often misquoted as being about performance when in fact it refers to transistor count (which matters for cost because of how chips are manufactured, on round wafers).
Intel could have made the Pentium with three 386 cores... but nobody would have known how to program it. Even though this would have been a rational evolutionary path, none of the processor companies took it, because of the programming challenge.
Sequential processors have the added challenge that light isn't going to get any faster: at 3.4GHz, light can travel across the die in one clock cycle. I'm not exactly sure what the implication of this is, but it's a cool stat at any rate.
In the client-server world, concurrency is already solved on the server side because many copies of the same code execute simultaneously; on the client side it is atypical to have multiple copies of code running. What's interesting about this is that it implies the scheduling of processes at the server level is more sophisticated than on the client, even though in many cases they may use the same operating system. But I think what it really points to is the scheduling of work across multiple servers, which is different from the OS scheduling processes across multiple (hundreds of?) processor cores.
The next question follows on the above point: isn't it easier to think about this kind of programming as programming against a server pool? Sutter agrees that we are moving to a "fabric computing" model but argues that most developers can't develop apps for a cloud, so the learning curve is still there. No short-term relief.
Sutter covers some of the challenges with our current generation of tools, which are hard to debug because of the abstraction of processes. There is also the challenge of proving quality. This leads to the next question, comparing the human brain's evolution to computers; but as Sutter points out, we don't know how the brain works and we can't build a system that replicates it, so while the point is well taken it doesn't get us any closer to solving the concurrency problem we face today.
The next comment points out that server-centric computing is well positioned as a home computing model. Sutter says he doesn't buy it, simply because all the evidence is to the contrary, the WWW notwithstanding. Even on the web, the client executes more cycles than the server, so it's a mistake to conflate the web with server-centric computing. And the fact is that desktop productivity apps are not delivered as web services from a server; even if they were, there would be more points of failure to consider, so in effect it reinforces the point about the concurrency problem.
There is some follow-on discussion that is over my head, but it basically underscores the point that concurrency is a complicated set of problems to solve.
[You write: Sequential processors have the added challenge that light isn't going to get any faster: at 3.4GHz, light can travel across the die in one clock cycle. I'm not exactly sure what the implication of this is, but it's a cool stat at any rate.]
The significance of this is that if you increase the clock speed any more, then your signal cannot get to the far reaches of the chip (e.g. to trigger some logic action) before the next signal is generated. The main way to solve this problem is to shrink the die so that you can get your signal where it needs to go before the next clock cycle; other options are to make the operation take two clock cycles, which is not good for speed, or to use asynchronous logic, which is a hard and radical way to attack the problem.
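A rough back-of-the-envelope sketch of the arithmetic behind this (my own, not from the talk or the comment); the die size and the on-chip signal speed below are assumptions picked purely for illustration:

    # How far can a signal get in one clock period?
    # Assumed numbers: a ~1.5 cm die, on-chip signals at roughly half the speed of light.
    C = 3.0e8              # speed of light in vacuum, m/s
    DIE_SIZE_M = 0.015     # assumed die edge length, ~1.5 cm
    SIGNAL_FRACTION = 0.5  # assumed on-chip signal speed as a fraction of c

    def distance_per_cycle(clock_hz, fraction=1.0):
        """Distance covered in one clock period, in meters."""
        return fraction * C / clock_hz

    for ghz in (1.0, 3.4, 10.0):
        hz = ghz * 1e9
        light = distance_per_cycle(hz)
        wire = distance_per_cycle(hz, SIGNAL_FRACTION)
        print(f"{ghz:4.1f} GHz: light {light*100:5.1f} cm/cycle, "
              f"~0.5c signal {wire*100:5.1f} cm/cycle "
              f"({wire / DIE_SIZE_M:.1f}x the assumed die)")

At 3.4 GHz light covers roughly 8.8 cm per cycle and a half-speed signal about 4.4 cm, only a few die-widths; by around 10 GHz the half-speed signal barely crosses the assumed die once per cycle, before even counting gate and wiring delays, which is why the clock can't simply keep climbing.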
To put this into a cute biz metaphor you might understand: imagine that we wanted to make the various options pits at the exchanges do more work. We can stuff more people in, but eventually you run into problems with crowding making it difficult to transmit the message (leakage), and you can only grow this mass of traders to the point where a shouted voice can still be heard on the other side...
p.s. How annoying that in addition to shrinking the text input area to the size of a postage stamp, Typepad has also eliminated html markup in comments that would make it easier to show quoted text. Are these people actively trying to destroy the usefulness of their product?
Posted by: Jim McCoy | Jul 12, 2005 at 09:58 AM
The other implication is that the designer needs to take great care in how they lay out the components.
But I wrote that more from the standpoint of not understanding the implication given that nobody is doing sequential processor development right now; all the major processor companies are doing multi-core.
Posted by: jeff | Jul 12, 2005 at 10:53 AM