I'm at an event today titled "Service Oriented Flexible Computing: Promises and Challenges for the Next Generation" being held at Stanford. It's really a great event, held every year in partnership with Accel Partners and the Stanford Networking Research Center.
Since they have a wireless connection (who doesn't these days?), I thought I'd blog it in real time. Too bad they don't have a conference wiki; they should have talked to Ross.
Next Generation Software Platforms: Virtualization, Reliability, and Systems Management
Hector Garcia-Molina, Chairman, Department of Computer Science at Stanford
- SOA is an extension of the client/server model; there are new types of servers, but the client side still just wants to get work done.
- servers are proliferating into "in-servers" and "out-servers". In-servers are back-office and enterprise apps; out-servers are simply servers exposed to the broader network.
- regardless of the architecture, you need to understand how the data is structured and who owns it.
- dividing the data up among machines can be done horizontally, vertically, or via replication (a quick sketch of the three follows after this list). These are interesting concepts that I honestly haven't followed much, but it's clear this is going to be an important factor for SOA apps. I'll try to find information to post, but if you have links please send them.
- the remaining challenges are performance, heterogeneity, privacy, and legal issues. Interestingly, privacy and legal (compliance, I assume?) are relatively new challenges we didn't even think of a few years ago. Considering that SOA apps are transcending the distinction between behind and through the firewall, I would assume that privacy will grow as a challenge for many vendors that never even thought about it before.
- specifically on privacy, the professor talked about the relative expense of dealing with encrypted data, in terms of both performance and cost.
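To make the partitioning point concrete for myself, here's a toy sketch of what I understand horizontal vs. vertical partitioning vs. replication to mean. The "orders" records and the two machines below are entirely made up for illustration; none of this came from the talk.

```python
# Toy illustration of three ways to divide data among machines.
# The "orders" records and the two machines are hypothetical.

orders = [
    {"id": 1, "customer": "acme", "region": "west", "total": 120.0},
    {"id": 2, "customer": "globex", "region": "east", "total": 75.5},
    {"id": 3, "customer": "initech", "region": "west", "total": 310.0},
]

# Horizontal partitioning: each machine holds complete rows,
# but only a subset of them (here, split by region).
horizontal = {
    "machine_a": [r for r in orders if r["region"] == "west"],
    "machine_b": [r for r in orders if r["region"] == "east"],
}

# Vertical partitioning: each machine holds all rows,
# but only some of the columns (joined back together by "id").
vertical = {
    "machine_a": [{"id": r["id"], "customer": r["customer"]} for r in orders],
    "machine_b": [{"id": r["id"], "region": r["region"], "total": r["total"]}
                  for r in orders],
}

# Replication: every machine holds a full copy of the data.
replicated = {
    "machine_a": list(orders),
    "machine_b": list(orders),
}

if __name__ == "__main__":
    print(horizontal["machine_a"])   # only the "west" rows
    print(vertical["machine_b"][0])  # row 1 without the customer column
```

Each scheme trades off differently: horizontal splits spread query load, vertical splits keep hot columns together, and replication buys availability at the cost of keeping copies in sync.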
Diane Greene, founder of VMware
- the basic virtualization trick: interpose a software layer at an interface, transparent to the software above, that controls the mapping to the resources below for error handling, shared resources, and security (rough sketch after this list).
- virtual infrastructure is a dynamic mapping of hardware resources to the business, supporting both new and legacy apps (think blades).
- the services that virtual infrastructure includes are provisioning, data/state mobility, resource policies, security, and shared resources.
- customer case example: Qualcomm
* 86% reduction in CPU count: from 60 CPUs down to 8
- benefits also include availability: no downtime from hardware changes, lightweight restarts, and integrated data/app mobility across the resource pool
- consistent management framework makes app mgmt easier by removing hardware idiosyncrasies. Major area of benefit.
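Here's my rough attempt to capture the "interpose a transparent layer" idea in code. The Disk interface and the striping policy below are hypothetical and much simpler than anything VMware actually does; the point is only that the software above keeps using the same interface while the layer underneath controls the mapping.

```python
# Rough sketch of interposition: the guest talks to the same interface it
# always did, while the layer underneath remaps onto shared resources.
# The Disk interface and mapping policy are made up for illustration.

class PhysicalDisk:
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def read(self, block):
        return self.blocks.get(block, b"\x00")

    def write(self, block, data):
        self.blocks[block] = data


class VirtualDisk:
    """Looks exactly like a disk to the software above, but remaps every
    block onto a shared pool of physical disks (and could add error
    handling, isolation, or security checks at this layer)."""

    def __init__(self, pool):
        self.pool = pool

    def _place(self, block):
        # trivial mapping policy: stripe blocks across the pool
        return self.pool[block % len(self.pool)]

    def read(self, block):
        return self._place(block).read(block)

    def write(self, block, data):
        self._place(block).write(block, data)


pool = [PhysicalDisk("disk0"), PhysicalDisk("disk1")]
vdisk = VirtualDisk(pool)    # the guest only ever sees vdisk
vdisk.write(7, b"hello")
print(vdisk.read(7))         # b'hello', regardless of which physical disk holds it
```

The transparency is the whole trick: because the interface above never changes, the mapping below can be moved, shared, or restarted without the apps noticing.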
Bill Coleman, co-founder BEA (he's the "B"), co-founder Cassatt Corp.
- where we came from: the computing model has been extended over four generations; nothing has really been replaced. The last wave set the stage for distributed computing.
- where we are: a huge hype period; it will take a decade to play out. The "Whole Product Concept" has not been realized. The current hype around grid and network computing benefits vendors more than customers.
- two dislocations are coming, and neither has hit its inflection point. Grid is the first because it overturns the economics of hardware, but the economics are not there yet because the interconnects are still expensive. Over the next few years, experience will be sufficient to commoditize those expensive interconnects.
- the second dislocation is software. Loosely coupled web services let policy and business process be disconnected from the underlying apps. This will upend the economics for the customer, which in turn will undermine the incumbents.
- if this is a commoditization business, only the commodity players win. There are three horizontal app silos: customer side, supply chain, and ERP. These apps will dis-integrate.
- large storage and switch vendors will have to live on leaner margins. Turnover is coming that will reconfigure the industry.
- the challenges in the SOA space are significant: it's not simple, it's not yet accepted as a standard, small players need to come together in a platform play, and it's not scaling yet.
- the network needs an operating system that is insensitive to scale, provides virtualization for storage, LAN, and apps, and sets business-driven policy
Q&A:
Q: are there innovations that are new?
A: Bill: no new fundamental concepts in 30 years, but extension is rampant.
Hector: there's nothing new in Google, but it changed the way people search, and that's innovative.
Q: how do vendors prevent the death spiral from commoditization?
A: Bill: IBM is trying to become a services vendor, adding more stuff into WebSphere and trying to make customers think this stuff is really hard but IBM will save them. IBM is really trying to lock customers in. As capacity-driven economics take hold, IBM will be forced to drive down the cost of services rather than hardware.
Q: How much will customers trust vendors?
A: Diane: vendors have to convince customers through deep technology evaluation.
"Networking and Computing Systems: Smarter and Faster Infrastructure for Flexible Computing"
Krish Ramakrishnan, CEO of Topspin Communications
- programmable server switch clusters
- the missing ingredients include a single switching fabric, central I/O, and central point of control
- the server switch is analogous to the network and storage switches; it looks at the network as an aggregation of storage and network capacity
- server switches provide more than just data access; they control servers in another form of virtualization.
- the virtualization framework includes a hyperfast interconnect (InfiniBand), Ethernet, Fibre Channel, virtualization and boot services (no local disk), and policy and provisioning (I've sketched what this might look like after this list)
- utility computing through best of breed servers, storage, apps and mgmt tools.
- this stuff is pretty neat, but I wonder what app vendors need to do, if anything, to make their apps compatible.
- pretty impressive list of partners, everyone from RLX to Intel.
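I tried to sketch what "central point of control plus boot services and provisioning" might look like in the simplest possible terms. The ServerSwitch class, the boot images, and the policies below are entirely hypothetical, not Topspin's actual product or interfaces.

```python
# Hypothetical sketch of a central controller provisioning diskless servers:
# assign each blade a boot image, a storage target, and a network segment
# according to a policy. None of this reflects Topspin's real product.

POLICIES = {
    "web-tier": {"boot_image": "linux-web.img", "storage": "san-lun-01", "vlan": 10},
    "db-tier":  {"boot_image": "linux-db.img",  "storage": "san-lun-02", "vlan": 20},
}

class ServerSwitch:
    def __init__(self):
        self.assignments = {}

    def provision(self, server_id, role):
        """Bind a bare server to a role; since it boots over the fabric
        with no local disk, reassigning it is just a remap."""
        self.assignments[server_id] = dict(POLICIES[role], role=role)
        return self.assignments[server_id]

    def repurpose(self, server_id, new_role):
        # the same hardware can be flipped to a different tier on demand
        return self.provision(server_id, new_role)

switch = ServerSwitch()
print(switch.provision("blade-03", "web-tier"))
print(switch.repurpose("blade-03", "db-tier"))   # same blade, new personality
```

If the app itself is stateless (or keeps its state on the SAN), then "which server am I running on" stops mattering, which I suspect is the answer to my question above about what app vendors need to do.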
Craig Partridge, Chief Scientist, BBN
- TCP offload engines: improve network performance by putting the protocols in silicon
- These offload systems don't work because the bottlenecks are not TCP but the operating system and memory subsystems.
- replacing one protocol with another protocol involves the same amount of complexity
- this is a technical discussion of something that doesn't really interest me... I'm gonna check email for a couple of minutes.
Bill Dally, Professor of EE and CS, Stanford
- the Efficiency Gap: current computing systems have a 100x gap between the capability of the raw materials and the realized gain. The reason is that we are building inefficient systems.
- the reason behind this inefficiency is that conventional processors are no longer realizing a 50% performance gain year over year.
- the future performance and cost gains will come from large multiprocessor systems and grids.
- stream processing changes the ratio of arithmetic to bandwidth by exposing producer-consumer locality. In other words, processing is cheap, bandwidth is expensive (a rough software analogy follows after this list).
- streams have no reuse, so there's no requirement to hold them in memory.
- streams also expose parallelism, thereby keeping the processors busy
- scalable from mobile devices to supercomputers.
- okay, lots of hardware technical diagrams now that are way over my head.
- Imagine is a stream processor prototype for signal and image processing.
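My (rough) mental model of the producer-consumer locality point, sketched in software. Real stream processors like Imagine do this in hardware with kernels and stream register files, so treat the generator pipeline below as an analogy only; the stages and numbers are mine, not from the talk.

```python
# Software analogy for the stream idea: each stage consumes its input as it
# is produced and never revisits it, so intermediate results don't have to
# be stored anywhere. Python generators give the same "no reuse" flavor.

def produce(n):
    # producer: e.g. samples coming off a sensor
    for i in range(n):
        yield float(i)

def scale(stream, k):
    # kernel 1: consume each element once, pass it straight on
    for x in stream:
        yield x * k

def clamp(stream, lo, hi):
    # kernel 2: again, each value is touched once and forwarded
    for x in stream:
        yield min(max(x, lo), hi)

# Chain the kernels: values flow producer -> scale -> clamp one at a time,
# so no intermediate array is ever materialized in memory.
pipeline = clamp(scale(produce(10), 2.5), 0.0, 10.0)
print(list(pipeline))
```

And because each element moves through the pipeline independently, many elements could be processed in parallel, which is the other half of the point about keeping the ALUs busy.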
Andy Bechtolsheim, whose bio is long and distinguished.
- enterprise data trends include 64-bit computing, horizontal scaling, software services, and network-centric architectures.
- grid/cluster is expected to be a $2.5b market in '07. An estimated 33% of all servers in '07 will be deployed in a grid.
- multiple tiers in the datacenters... web/edge servers, app servers, db servers.
- mapping apps to clusters: Amdahl's Law gets to the heart of performance through decomposition for parallelism (worked example after this list).
- the best apps for clusters are loosely coupled. I find this interesting because loosely coupled apps depend more on the interconnects? Or is it that the loosely coupled apps are inherently decomposed and thereby parallel by nature... yeah, I think that's it.
- the cluster fabric landscape is confusing: Gigabit Ethernet, Myrinet, Quadrics, InfiniBand, and many more... none of them have reached critical mass in the market. The good news is that they are essentially the same physical layer.
- challenges include security and standardization of RDMA support.
- interesting final note in the conclusions: thermal load is increasing, so more air conditioning is needed.
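Since Amdahl's Law came up, here's the back-of-the-envelope version I keep in my head. The 90%-parallel figure below is just an example I picked, not a number from the talk.

```python
# Amdahl's Law: if a fraction p of the work can be parallelized across n
# machines, the best possible speedup is 1 / ((1 - p) + p / n).

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Example with made-up numbers: an app that is 90% parallelizable.
for n in (2, 8, 32, 1024):
    print(n, round(amdahl_speedup(0.90, n), 2))
# Even with 1024 nodes the speedup tops out near 1/0.1 = 10x, which is why
# loosely coupled (i.e. almost entirely parallel) apps fit clusters best.
```

That little serial fraction is the whole story behind the "best apps for clusters are loosely coupled" point above: decompose the work so the serial, chatty part shrinks, and the cluster pays off.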
------- I'm leaving for the day, I want to digest what I heard and write more about it in the coming days.
This is a very helpful summary of some of the key points of the meeting. It also demonstrates how confusing the understanding of what is happening in grids really is. Was there any meeting of the minds on where we are likely to be going? I'd also be very interested in contacting a few of the speakers, including Bill Coleman and Diane Greene. Do you have emails for them?
Posted by: Bob Cohen | Jul 05, 2004 at 11:49 AM