November 23, 2007
John Powers, the CEO of Digipede Technologies, penned a thoughtful post on his blog last week about some of the negative reaction he was hearing about Microsoft at the recent Supercomputing conference (SC07). Apparently, some people were claiming that nobody was using Windows for high performance computing, and that even to suggest so meant you were probably on Microsoft's payroll. Wow. I guess the anti-Gates contingent is alive and well and living amongst us in the HPC community.
"It is amazing to me the level of religious fervor that Microsoft still inspires. The bashers out there can be perfectly calm and reasonable about a wide range of topics -- but say the word 'Microsoft,' and they turn bright red and irrational. I have watched this phenomenon for years, and still find it inexplicable. Microsoft is a company. That company makes software. Some of their software is very, very good. Some of it is remarkably bad. I don't understand why some people find it so hard to remain objective (or even civil) when discussing their products and market presence."
How surprising can this really be, though? I think we have to come to grips with the fact that technologists are humans first and rational people second. As humans, we like to indulge in our personal dogma -- for example: Windows is evil, open source software will save computing, or the Internet is free. Each has an element of truth in it, but none is entirely true. In the computing world, people rabidly defend their favorite programming language, operating system, chip architecture, IT company, etc., because they formed emotional attachments to those things early on. It makes for a simpler world view, but it obscures reality.
Then there's the fact that Microsoft has made a gazillion dollars selling its software, which, by itself, creates a certain amount of fear and loathing. Even Google, the golden child of the Internet Age, attracts hostility for being so good at what it does. The reality is that the market ruthlessly rewards the winners, and discards the losers. He who is without a bottom line can cast the first stone.
On the other hand, Microsoft probably deserves some degree of resentment. The original version of Windows and some of the company's other early software were pretty bad by today's standards. And, in all honesty, the company has engaged in monopolistic practices from time to time. If you need any more reasons to be suspicious of Microsoft, you don't need to venture very far in cyberspace. There's even a Wikipedia entry devoted to criticisms of the company.
In the HPC realm, the real point of contention is between open source Linux and Microsoft's Windows HPC offerings. Support for open source software runs deep in this community, since it has been one of the principal drivers behind bringing high performance computing to the masses. Open source Linux is now the de facto OS on HPC platforms, from workstations to supercomputers.
But Linux isn't free. If it were, Red Hat and Novell wouldn't have a business, and Linux programmers wouldn't get paid. The OS is, however, malleable. Linux allows companies like Cray to build an ultra-lightweight supercomputing version for the XT5, and cluster vendors like SiCortex to craft another version for their customized architecture. The open source nature of Linux enables a large community of developers to contribute to a common source repository for the betterment of everyone.
Enter Microsoft's Windows Compute Cluster Server (CCS). While not open source, it offers a more complete software platform beyond just the OS environment. An MPI library, a job scheduler, a remote installation service, cluster administration tools and various drivers are all included. The recently announced sequel to CCS, Windows HPC Server 2008 (currently in beta and expected for release in the second half of 2008), will add new high-speed networking, upgraded cluster management tools, failover capabilities, an SOA-style job scheduler, and support for third-party clustered file systems. Since all this comes from a single company, integration and compatibility are guaranteed.
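For readers curious what the bundled scheduler looks like in practice, CCS exposes a command-line job interface alongside its graphical tools. A minimal sketch of submitting an MPI run follows; the flag names and output paths shown here are illustrative assumptions, and the exact syntax varies by CCS version, so consult Microsoft's documentation before relying on it:

```
REM Illustrative sketch: submit an 8-process MPI job to the CCS scheduler.
REM Flag spellings and the output file name are assumptions, not verified syntax.
job submit /numprocessors:8 /stdout:results.txt mpiexec myapp.exe
```

The point is less the syntax than the packaging: the scheduler, the MPI runtime (mpiexec), and the node management all ship together, which is precisely the integration story Microsoft is selling.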
On the application side, Microsoft Excel now includes multi-threaded support for spreadsheet calculations, as well as cluster support on the server side. Microsoft is also in the process of adding parallel programming extensions to its popular .NET Framework.
Microsoft's initial HPC strategy is to leverage the vast Windows ecosystem and provide non-Linux users an avenue into high performance computing. That's not such a ridiculous proposition. Microsoft has no need to directly confront the Linux crowd right now. There are plenty of Windows-only departments and other groups within organizations that would benefit from HPC, but are unwilling to go the Linux route. The Windows HPC solution offers a path of least resistance.
And for now at least, Gates and company are playing nice with the Linuxians, offering support for dual-boot setups so that a cluster can be brought up in either environment. The real battle will occur if Windows HPC Server gets some real traction in the market and starts to challenge Linux as an equal.
As for the anti-Microsoft ideologues: take a breath. Ideals are fine, but there is little to be gained by worshipping them. It's certainly no substitute for critical thinking. In the last six years, everyone on the planet has seen the result of the fundamentalist approach. Let's do without the ideology for a while and see what happens.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - November 22, 2007 @ 9:00 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.