Here is a collection of highlights from this week’s news stream as reported by HPCwire.
Microsoft Revs Windows HPC Server
At the High Performance Computing Financial Markets Conference, Microsoft announced the third release of its Windows HPC operating system, Windows HPC Server 2008 R2.
Bill Hilf, general manager of Microsoft Technical Computing Group, commented on the new release:
“This release of Windows HPC server is a key step in our long-term goal to make the power of technical computing accessible to a broader set of customers, with capabilities across the desktop, servers and the cloud. Customers in all industries can use Windows HPC Server as a foundation for building and running simulations that model the world around us, speeding discovery and helping to make better decisions.”
Windows HPC Server 2008 R2 has some noteworthy features, including a cloud-bursting capability that allows users to offload peak computing demand to the cloud. A future upgrade will allow users to provision and manage HPC nodes in Windows Azure from their on-premises systems. Another feature lets heavy-duty Excel users tap offsite computational cycles to run complex spreadsheets, yielding significant time savings. The new variant even allows PCs running Windows 7 to function as a computational grid, much like big volunteer computing programs such as SETI@home.
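The scheduling idea behind cloud bursting can be pictured with a toy sketch in Python. This is only a conceptual illustration with made-up names, not the Windows HPC Server API: jobs fill local nodes first, and anything beyond local capacity "bursts" to cloud nodes.

```python
def route_jobs(jobs, local_capacity):
    """Toy model of cloud bursting: run jobs locally up to capacity,
    send the overflow to cloud nodes. Hypothetical names; not the
    Windows HPC Server API."""
    local = jobs[:local_capacity]   # fills on-premises nodes first
    cloud = jobs[local_capacity:]   # peak demand spills to the cloud
    return local, cloud

# With 8 queued jobs and 5 local nodes, 3 jobs burst to the cloud.
local, cloud = route_jobs(list(range(8)), 5)
```

The point of the feature is that this overflow decision, and the provisioning of the cloud nodes behind it, is handled by the cluster manager rather than by the user.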
Microsoft is positioning itself as a competitive alternative to Linux and is aiming to become the operating system of choice for technical computing. The company wants to get behind applications that run the gamut from simulating financial markets to curing disease to designing the next generation of vehicles.
IDC’s Earl Joseph gives his nod of approval:
“Technical computing presents an enormous opportunity to transform massive amounts of data into powerful insights and solutions. Companies and products, like the new Windows HPC Server 2008 R2, help customers easily take advantage of new technology advances, such as HPC clusters, GPUs, cloud computing and multicore processors. All of these enhancements will help to accelerate the growth of the high-performance computing market.”
Digital Manufacturing Divide Gets Attention
Analyst firm Intersect360 Research and the National Center for Manufacturing Sciences (NCMS) jointly released the results of a survey on Digital Manufacturing in the US. Based on data from that survey, NCMS will present its strategy to bring advanced computing tools to the US manufacturing supply chain during the “Revitalizing Manufacturing: Transforming the Way America Builds” event, to be held September 30.
From the announcement:
For decades, the largest U.S. automotive and aerospace manufacturers have used supercomputing technologies to pursue “Digital Manufacturing” processes. The programs they run allow them to shorten time-to-market, improve product quality, and reduce costs by designing their products on a computer before they build expensive physical prototypes. With over 300,000 small- and mid-sized manufacturers (SMMs) based in the U.S., the study conducted by NCMS and Intersect360 Research probed the reasons why the digital manufacturing concept has not been broadly adopted outside the top echelon. “There is a definitive gap between what U.S. manufacturers could be doing, that is, what they want to be doing, and what they are actually doing,” said Addison Snell, CEO of Intersect360 Research. “They know where they want to go; they just don’t know how to get there.”
Barriers to adoption include the cost of the necessary computer hardware and software and a deficit of expertise. Informed by the survey data and related analysis, however, NCMS has developed a plan to overcome these barriers by leveraging the talent, ideas and facilities within universities, national labs and industrial research centers, and parlaying that potential into new jobs and a revitalized U.S. manufacturing economy. NCMS aims to bring these transformative tools to the more than 300,000 small- and mid-sized manufacturers in the U.S.
And the Winner Is... 10 Gigabit Ethernet
In a week of top-rate conferences and plenty of meaty news, one of our most-read news items concerns interconnects, so it’s only appropriate to give it some space in this weekly wrapup.
Chelsio Communications announced the results of an IBM benchmark study showing that 10 Gigabit Ethernet outperformed InfiniBand and Gigabit Ethernet across a range of HPC applications. The applications were compared over 4x DDR InfiniBand, 10 Gigabit Ethernet using iWARP, and Gigabit Ethernet using TCP/IP. The results indicated that 10 Gigabit Ethernet is superior to InfiniBand on some standard benchmark suites and comparable on others.
From the release:
The test configuration was a computing cluster in which each node had two 2.3 GHz quad-core Opteron processors and 16 GB of memory. Each node was configured with 4x DDR InfiniBand connected to a 96-port Cisco DDR switch, using dual-port DDR ConnectX adapters from Mellanox; a 10Gb Ethernet network using Chelsio dual-port adapters with full offload capability, connected to a 20-port Force10 10Gb Ethernet switch; and a Gigabit Ethernet network connected through a Cisco switch. The software configuration was standards-based as well, running Red Hat Enterprise Linux Server release 5.2.
IBM ran eight applications over the networks, including NETPERF, a benchmark suite commonly used to measure various aspects of networking performance, with a primary focus on bulk data transfer and request/response performance using either TCP/IP or UDP and the Berkeley sockets interface. The NETPERF results showed 10Gb Ethernet greatly outperforming InfiniBand for both TCP and UDP, with Gigabit Ethernet a distant third.
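For readers unfamiliar with this style of benchmark, the bulk-transfer measurement is easy to picture: stream bytes over a socket for a fixed interval and divide by elapsed time. The minimal Python sketch below does this over loopback using the Berkeley sockets API. It is only an illustration of what a TCP bulk-transfer test measures, not NETPERF itself; the buffer size and duration are arbitrary choices.

```python
import socket
import threading
import time

def _sink(server_sock):
    """Accept one connection and drain bytes until the peer closes."""
    conn, _ = server_sock.accept()
    with conn:
        while conn.recv(65536):
            pass

def bulk_transfer_mbps(duration=1.0, bufsize=65536):
    """Stream zero-filled buffers over a loopback TCP connection for
    `duration` seconds and return throughput in megabits per second.
    Illustrative only; a loopback figure says nothing about a real NIC."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))        # let the OS pick a free port
    server.listen(1)
    receiver = threading.Thread(target=_sink, args=(server,))
    receiver.start()

    sent = 0
    payload = b"\x00" * bufsize
    with socket.create_connection(server.getsockname()) as client:
        start = time.monotonic()
        deadline = start + duration
        while time.monotonic() < deadline:
            client.sendall(payload)
            sent += len(payload)
        elapsed = time.monotonic() - start

    receiver.join()
    server.close()
    return (sent * 8) / (elapsed * 1e6)  # bits sent / seconds, in Mb/s

if __name__ == "__main__":
    print(f"loopback bulk transfer: {bulk_transfer_mbps():.0f} Mb/s")
```

The real benchmark additionally measures request/response performance (small messages ping-ponged between hosts), which is where an offloaded or RDMA-capable fabric shows its latency advantage.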
The full report is available here (PDF).