September 20, 2010
New solution will help spur customer insights and innovation
NEW YORK, Sept. 20 -- Today during the keynote address at the High Performance Computing Financial Markets Conference, Microsoft Corp. announced the immediate availability of Windows HPC Server 2008 R2. The server provides a comprehensive, integrated high-performance computing (HPC) solution at a low cost of ownership, offering new capabilities for powerful analysis and handling the toughest technical computing workloads in business, academia and government.
"This release of Windows HPC server is a key step in our long-term goal to make the power of technical computing accessible to a broader set of customers, with capabilities across the desktop, servers and the cloud," said Bill Hilf, general manager, Microsoft Technical Computing Group. "Customers in all industries can use Windows HPC Server as a foundation for building and running simulations that model the world around us, speeding discovery and helping to make better decisions."
Windows HPC Server 2008 R2 Technical Advancements
Customers rely on Windows HPC Server clusters to run a wide variety of mission-critical applications, from simulating financial markets to fighting disease to building next-generation vehicles. Their feedback has driven important advancements in Windows HPC Server 2008 R2.
Extending HPC to the Cloud
The cloud is a key pillar of Microsoft's Technical Computing initiative. At the High Performance Computing Financial Markets Conference, Microsoft demonstrated how customers will be able to burst HPC workloads from their on-premises datacenters to the cloud for elastic, just-in-time processing power. In the near future, the company will release an update to Windows HPC Server that allows customers to provision and manage HPC nodes in Windows Azure from within on-premises server clusters.
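The bursting model described above boils down to a capacity decision: when queued work exceeds what the on-premises cluster can absorb, provision just-in-time cloud nodes. The sketch below illustrates that decision in Python; the function name and the jobs-per-node parameter are hypothetical illustrations, not the Windows HPC Server or Windows Azure API.

```python
# Illustrative sketch of the "burst to cloud" capacity decision,
# assuming a simple model: each node (on-premises or cloud) can
# absorb a fixed number of queued jobs. Not the HPC Server API.

def nodes_to_burst(queued_jobs, on_prem_nodes, jobs_per_node=4):
    """Return how many extra cloud nodes the backlog calls for."""
    capacity = on_prem_nodes * jobs_per_node
    overflow = max(0, queued_jobs - capacity)
    # Ceiling division: each cloud node absorbs jobs_per_node jobs.
    return -(-overflow // jobs_per_node)

# 120 queued jobs against 20 nodes (capacity 80) -> burst 10 cloud nodes
print(nodes_to_burst(queued_jobs=120, on_prem_nodes=20))
```

When the backlog clears, the same arithmetic run in reverse tells the scheduler the cloud nodes can be released, which is what makes the capacity "elastic."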
Parallel Development Simplified
At the conference, Microsoft also highlighted another tenet of its Technical Computing initiative: simplifying the development of HPC applications for the new generation of distributed, or parallel, computing resources on client systems, server clusters and in the cloud. With Windows HPC Server 2008 R2, Visual Studio 2010, and partners such as Intel Corp. and NVIDIA Corp., Microsoft provides an integrated parallel computing platform on which developers can efficiently design, test and optimize parallel code for deployment on client, cluster or cloud computing resources.
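The pattern such tooling targets is familiar: decompose a loop of independent work items across a pool of workers. The generic sketch below uses Python's standard library rather than Visual Studio or the HPC Server SDK; the `simulate` stand-in and worker count are illustrative assumptions.

```python
# Generic illustration of the parallel decomposition that parallel
# development tools help design, test and optimize: independent
# iterations farmed out to a worker pool. Standard library only.
from concurrent.futures import ThreadPoolExecutor

def simulate(seed):
    # Stand-in for one independent simulation run (a toy linear
    # congruential step, purely for demonstration).
    return (seed * 9301 + 49297) % 233280

def run_all(seeds, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map preserves input order, so results line up with seeds.
        return list(pool.map(simulate, seeds))

results = run_all(range(8))
```

On a cluster, the same decomposition would be distributed across nodes by the job scheduler rather than across threads in one process, but the design question -- which iterations are independent -- is identical.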
"Technical computing presents an enormous opportunity to transform massive amounts of data into powerful insights and solutions," said Earl Joseph at IDC. "Companies and products, like the new Windows HPC Server 2008 R2, help customers easily take advantage of new technology advances, such as HPC clusters, GPUs, cloud computing and multicore processors. All of these enhancements will help to accelerate the growth of the high-performance computing market."
Founded in 1975, Microsoft (Nasdaq "MSFT") is the worldwide leader in software, services and solutions that help people and businesses realize their full potential.
1 For more information, see "Choosing between Windows and Linux for High Performance Computing" white paper (PDF).
2 Based on an HPC deployment scenario of 250 compute nodes and 1,000 desktop nodes; for more information, see "Evaluating the Lifecycle Costs of High Performance Computing Solutions: Windows HPC Server and Linux-based Solutions" white paper (PDF).
Source: Microsoft Corp.
Contributing commentator Andrew Jones offers a break in the news cycle with an assessment of what the national "size matters" contest means for the U.S. and other nations...
Today at the International Supercomputing Conference in Leipzig, Germany, Jack Dongarra presented a proposed benchmark that could carry a bit more weight than its older Linpack companion. The high performance conjugate gradient (HPCG) concept takes into account new architectures and new applications, while shedding Linpack's exclusive focus on floating point....
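At the heart of HPCG is a conjugate-gradient solve, whose sparse, memory-bound access pattern stresses machines very differently than Linpack's dense floating-point kernels. A minimal CG iteration for a small symmetric positive-definite system, in plain Python purely for illustration (the real benchmark operates on large sparse systems):

```python
# Minimal conjugate-gradient solver for A x = b, with A symmetric
# positive-definite. Illustrates the kernel at the core of HPCG;
# a toy dense version, not the benchmark code itself.

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                         # residual r = b - A x (x starts at 0)
    p = r[:]                         # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# Solve [[4, 1], [1, 3]] x = [1, 2]; exact answer is [1/11, 7/11]
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

The matrix-vector product inside the loop is where the memory traffic lives, which is why this kernel rewards bandwidth and latency rather than peak flops.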
Not content to let the Tianhe-2 announcement ride alone, Intel rolled out a series of announcements around its Knights Corner and Xeon Phi products--all of which are aimed at adding some options and variety for a wider base of potential users across the HPC spectrum. Today at the International Supercomputing Conference, the company's Raj....
Jun 18, 2013 | The world's largest supercomputers, like Tianhe-2, are great at traditional, compute-intensive HPC workloads, such as simulating atomic decay or modeling tornadoes. But data-intensive applications--such as mining big data sets for connections--are a different sort of workload, and run best on a different sort of computer.
Jun 18, 2013 | Researchers are finding innovative uses for Gordon, the 285-teraflop supercomputer housed at the San Diego Supercomputer Center (SDSC) that has a unique flash-based storage system. Since the system went online, researchers have put its incredibly fast I/O to use on a wide variety of workloads, ranging from chemistry to political science.
Jun 17, 2013 | The advent of low-power mobile processors and cloud delivery models is changing the economics of computing. But just as an economy car is good at different things than a full-size truck, an HPC workload still has certain computing demands that neither the fastest smartphone nor the most elastic cloud cluster can fulfill.
Jun 14, 2013 | For all the progress we've made in IT over the last 50 years, there's one area of life that has steadfastly eluded the grasp of computers: understanding human language. Now, researchers at the Texas Advanced Computing Center (TACC) are using a Hadoop cluster on the center's Longhorn supercomputer to push the state of the art of language processing a little bit further.
Jun 13, 2013 | Titan, the Cray XK7 at Oak Ridge National Lab that debuted last fall as the fastest supercomputer in the world with 17.59 petaflops of sustained computing power, will rely on its previous LINPACK result for the upcoming edition of the Top 500 list.