May 13, 2008
BladeCenter QS22 delivers supercomputing power for everything from financial trading to oil-field discovery
ARMONK, N.Y., May 13 -- Driven by growing commercial need in areas such as financial services, digital media creation and medical imaging, IBM today expanded its High Performance Computing (HPC) capabilities for businesses with the introduction of the IBM BladeCenter QS22 -- a new, economical supercomputing technology inspired by advanced scientific research facilities.
The heart of the QS22 is a new processor compliant with the Cell Broadband Engine (Cell/B.E.) Architecture, originally developed by IBM, Sony and Toshiba to provide the computing power for cutting-edge gaming applications. For the most challenging arithmetic operations, this new processor, the IBM PowerXCell 8i, offers five times the speed of the original Cell/B.E. processor.
Coupled with additions such as 16 times more memory (up to 32 GB) than its predecessors, the QS22 can handle workloads that previously required dozens of servers. For a physician, as an example, that could mean finding and diagnosing a tumor in seconds instead of hours.
IBM has built a strong ecosystem around the new QS22 to address critical real-time analytic and imaging projects, with over 20 IBM business partners enabling key solutions for the Cell/B.E. technology and 35 universities providing in-depth curricula and resources. In total, these investments create an environment where HPC innovations can easily be introduced into the market, and a wider spectrum of businesses can take advantage of the technology's unique capabilities and potential. Already, more than 50 customers worldwide are moving significant workloads to the QS22.
The New Enterprise Datacenter
According to technology analyst firm Gartner, more than 70 percent of Global 1000 companies will need to dramatically change their datacenters in the next five years, as they run out of power and space while managing skyrocketing energy and cooling costs. In response, IBM is helping clients develop a new enterprise datacenter, which offers dramatic improvements in IT efficiency and provides for rapid deployment of new IT services to support future business growth. IBM is helping clients move to new enterprise datacenters by focusing on best practices in virtualization, highly efficient IT, service management and cloud computing.
The QS22 was designed from the ground up as a key element of this new enterprise datacenter initiative. For development, the QS22 offers an open environment, combining the flexibility of Red Hat Enterprise Linux as the primary operating system with the open development environment of Eclipse. For energy efficiency, it delivers better performance per watt and manages power draw across the overall server chassis more effectively than previous generations, thanks to key built-in power-management features.
In addition, IBM has made thousands of pages of technical documentation on the Cell/B.E. Architecture available to the public, including a free, full-system simulator. IBM has also released an upgrade to its Software Development Kit (SDK) for Multicore Acceleration v3, providing enhancements and templates that enable clients to utilize the new features of the QS22.
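To illustrate the programming model the SDK supports, in which a Power host core (the PPE) dispatches work to the chip's synergistic processing elements (SPEs), the sketch below shows the canonical host-side offload pattern using libspe2, the runtime library distributed with the SDK. It is a minimal illustration rather than code from the SDK itself; the embedded SPU program name spu_kernel is a placeholder supplied by a separate SPU build step.

/* Minimal PPE-side sketch: create an SPE context, load an embedded SPU
 * executable, and run it to completion using libspe2 (shipped with the
 * SDK for Multicore Acceleration). "spu_kernel" is a placeholder name. */
#include <stdio.h>
#include <stdlib.h>
#include <libspe2.h>

extern spe_program_handle_t spu_kernel;   /* SPU image embedded at link time */

int main(void)
{
    unsigned int entry = SPE_DEFAULT_ENTRY;
    spe_stop_info_t stop_info;

    /* Create one SPE context (no gang, no special flags). */
    spe_context_ptr_t ctx = spe_context_create(0, NULL);
    if (ctx == NULL) {
        perror("spe_context_create");
        return EXIT_FAILURE;
    }

    /* Load the SPU executable into the context's local store. */
    if (spe_program_load(ctx, &spu_kernel) != 0) {
        perror("spe_program_load");
        return EXIT_FAILURE;
    }

    /* Blocks until the SPU program stops; argp/envp are unused here. */
    if (spe_context_run(ctx, &entry, 0, NULL, NULL, &stop_info) < 0) {
        perror("spe_context_run");
        return EXIT_FAILURE;
    }

    spe_context_destroy(ctx);
    return EXIT_SUCCESS;
}

In practice, each SPE context is typically run on its own host thread so that all eight SPEs of a PowerXCell 8i processor can work concurrently while the PPE coordinates data movement between main memory and the SPEs' local stores.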
The new QS22 complements all other IBM BladeCenter offerings, such as those based on Intel Xeon, AMD Opteron and IBM Power processors. IBM BladeCenter is the broadest portfolio of blade offerings in the industry, designed to address a range of client environments, from small and medium business to telecommunications and -- with the QS22 in particular -- high performance computing. By utilizing the PowerXCell 8i processor, the QS22 also allows IT managers to evaluate how much of an application needs the supercomputing power of the Cell/B.E. Architecture and how much can remain on a traditional system, providing a full range of options while balancing other system priorities.
"The QS22 is a technological leap over the physical limitations of traditional processors that often dampen the ability of organizations to reach their goals," said Jim Comfort, vice president, IBM Systems & Technology Group. "IBM has delivered on the promise of integrating HPC into the business world in a way that allows developers, clients and IT departments to ramp up quickly and get results without delay."
All in the HPC Family
The QS22 is part of a robust family of HPC products at IBM, all built around a holistic approach to computing: designing and delivering fast, highly efficient, easily accessible technical solutions to clients. From the Cell Broadband Engine Architecture-based BladeCenter QS22 to the Blue Gene supercomputer, to products like IBM's Power Systems, industry-standard clusters and high-performance System Storage, the HPC family at IBM is the result of a rich history of discovery and award-winning innovation.
The QS22 will be available in early June, while the SDK for Multicore Acceleration v3 is available now.
For more information on the BladeCenter QS22, visit http://www-03.ibm.com/systems/info/bladecenter/qs22/index.html.
For more information on the Cell Broadband Engine, visit http://www.ibm.com/technology/cell.
For more information on IBM's High Performance Computing portfolio, visit http://www.ibm.com/deepcomputing.