June 09, 2006
As this issue was being published, Microsoft was getting ready to announce its first production version of Windows Compute Cluster Server 2003 (CCS). After much anticipation, the company is finally releasing its cluster management offering. The actual announcement is a "release to manufacturing," which means the engineers at Redmond have signed off on the final version. No more betas, no more release candidates. This is it. For evaluation purposes, you should be able to download CCS from the company's website very soon, but you won't be able to purchase it until August.
CCS, designed as an extension of Windows Server 2003, will provide a cluster management software platform for 64-bit x86 high performance computing systems. It will go up against the established Linux-based solutions that now dominate HPC cluster systems worldwide.
The Linux competition will be tough, but Microsoft does have two big things in their favor: (1) Many organizations, both government and commercial, are already comfortably using Windows-based systems, everything from PCs to enterprise servers; and (2) As the largest software provider in the world, the company commands the resources to make things happen.
However, high performance computing is generally unfamiliar territory to Gates and company. Within the last year, they've managed to snag such notable HPC veterans as Tony Hey and Burton Smith, but most of Microsoft's expertise has grown up around PCs and, more recently, enterprise servers.
And because Microsoft came late to the party, rather than setting standards, the company has had to adapt to the existing HPC culture. For example, they incorporated both MPI and the Beowulf cluster model of computing into their product. In addition, they've partnered with some key cluster OEMs (HP, IBM and Dell), interconnect vendors (Myricom, Voltaire, Mellanox and SilverStorm) and software vendors (Fluent, LSTC and MSC.Software) to make sure CCS is well supported in the HPC ecosystem.
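For readers who haven't worked with MPI, the sketch below shows the kind of generic, standards-based MPI program a CCS cluster is meant to host. It is an illustrative example only, not taken from Microsoft's materials; under CCS such a program would typically be launched across nodes with mpiexec through the cluster's job scheduler.

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal MPI "hello" program: every process reports its rank out of
       the total number of processes in the job. */
    int main(int argc, char *argv[])
    {
        int rank, size;
        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                         /* shut down cleanly */
        return 0;
    }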
Assuming the product generally works as expected, Microsoft should be able to pick a lot of low-hanging fruit in Windows-dominated organizations that don't already have many clusters deployed and are looking to scale out. CCS may also be able to compete against Linux solutions where end users' workstations are Windows-based. In this environment, CCS will be able to provide a more seamless end-to-end solution.
Initially, CCS may get the most traction in workgroup-sized clusters (less than $50K) or even smaller systems. In particular, it would seem to be a natural software platform for "personal supercomputers," often defined as systems under $10K. From the user's point of view, the conceptual difference between a personal computer and a personal supercomputer is rather small, so OEMs of these machines may find the Microsoft solution very attractive. In fact, Tyan's Typhoon personal supercomputers have already been demonstrated running under CCS.
To learn more about this new product, I talked to Kyril Faenov, Microsoft's director for High Performance Computing. Our feature article outlines the product's capabilities and provides Faenov's view on how it will fit into the HPC marketplace. The folks at Redmond seem to have done their homework. Here's the money quote from Faenov:
"Ultimately, the folks that we're talking to are focused on getting their work done. They just want to be able to run a core set of applications to get their results. They really don't particularly care what the underlying system is."
It'll be interesting to see how this unfolds over the next year.
Also in This Issue -- a Sterling Interview
If clusters are not your passion, catch the Thomas Sterling interview this week. Sterling, along with Don Becker, developed the original Beowulf cluster computing model, but now confines his interests mainly to high-end supercomputing technologies. Contributing editor Chris Lazou caught up with him at the NEC User Group meeting last month in Toronto. The resulting interview is one of the best in HPCwire this year.
Lazou and Sterling discussed many interesting topics: silicon technologies, optical computing, federal funding of HPC, DARPA's HPCS program, Petaflop computing and Sterling's own research work. I thought the most interesting exchange was when Lazou brought up the need for new HPC languages and the chicken-and-egg problem of getting software vendors to support them. Here's what Sterling had to say:
"Every language was a new language at one time. Most are not accepted by the mainstream. Some are adopted for niche applications. Very few have a major impact. The thing to remember is that over the next sixty years (the length of all electronic computing to date), we will write many times as much software than already exists today. The true legacy code is our future not our past."
I hope someone at the DARPA HPCS office is listening.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - June 08, 2006 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.