IBM Sees Blue Gene Adoption Growing

By Michael Feldman

August 7, 2008

IBM's Blue Gene technology has received some notable attention lately — especially internationally. Last month, Germany and Japan announced separate deployments of Blue Gene/L systems. Both deployments now represent the highest performing supercomputer systems in their respective countries. Although the timing may have been a coincidence, IBM views these events as part of a growing acceptance of Blue Gene to solve a wider range of high performance computing problems.

Simulating Quarks and Gluons

Japan's High Energy Accelerator Research Organization (KEK), an inter-university research institute, has deployed ten racks of Blue Gene/L technology, configured into three separate systems, with an aggregate peak performance of 57.3 teraflops.

KEK determined that the three-system configuration was the best approach for the types of particle physics simulations it has in mind. Each simulation must be run with a different set of parameters, and a massive number of these parameter variations are executed simultaneously.
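The workload KEK describes is a classic embarrassingly parallel parameter sweep. Below is a minimal sketch of the pattern in Python, with a hypothetical stand-in for the simulation itself (KEK's actual production codes are of course far more involved and run across the Blue Gene nodes themselves):

```python
from itertools import product
from multiprocessing import Pool

def run_simulation(params):
    """Hypothetical stand-in for one physics simulation with a fixed
    parameter set; a real code would launch a full parallel job."""
    coupling, quark_mass = params
    # ... heavy numerical work would happen here ...
    return params, coupling / quark_mass  # placeholder result

if __name__ == "__main__":
    # Every combination of parameters is an independent simulation,
    # so all of them can run concurrently.
    couplings = [5.7, 5.9, 6.1]
    quark_masses = [0.01, 0.02, 0.04]
    with Pool() as pool:
        results = pool.map(run_simulation, product(couplings, quark_masses))
    for params, value in results:
        print(params, value)
```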

KEK's use of supercomputers has allowed significant achievements in high-energy accelerator science, especially in simulating the dynamics of quarks and gluons, the elementary components of matter. KEK's research into the underlying secrets of nature, including the origin of the universe and matter, requires large-scale numerical simulation and a dramatic increase in computing power.

“As our research in the areas of theoretical high energy physics continues to evolve, the need for computing power is ever greater,” said Shoji Hashimoto, Ph.D., associate professor at the Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization. “IBM's Blue Gene is the ideal system to offer our institute performance levels that will allow areas of scientific discovery that were previously unattainable.”

Scientific Research for Europe

Meanwhile, the German Research Center Juelich, one of the three high-profile German supercomputing centers, has inaugurated a newly deployed Blue Gene system that now ranks as the most powerful supercomputer in Europe.

The system joins an existing 8.9 teraflop supercomputer at Research Center Juelich that is also based on IBM POWER architecture technology. This dual-supercomputer setup offers the capacity to fulfill the varying needs of the scientific user community. Beyond the hardware itself, Juelich's ability to support researchers in methodology, fast algorithms and program efficiency is another important aspect of the center's infrastructure.

The Juelich Blue Gene/L installation will offer a peak performance of 45.8 teraflops and a sustained performance of 36.5 teraflops. Juelich originally had a single Blue Gene rack; in this latest deployment, seven more were added and connected into one eight-rack system.

The Juelich installation will be used for the most compute intensive research tasks of German and European scientists. Serving as a virtual laboratory, the system will be used for scientific discovery in areas such as particle physics, material sciences, life sciences and environmental research. For example, it will be used to simulate the diffusion of harmful materials in soil and in the atmosphere.
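To give a flavor of that last example, contaminant transport is commonly modeled with the diffusion equation. Here is a minimal one-dimensional explicit finite-difference sketch (purely illustrative; a production soil or atmosphere model would be three-dimensional, would include advection and chemistry, and would be distributed across thousands of nodes):

```python
import numpy as np

# 1D diffusion: dc/dt = D * d2c/dx2, solved with explicit finite differences.
D = 1e-5              # diffusivity (m^2/s), illustrative value
nx, dx, dt = 200, 0.1, 100.0
assert D * dt / dx**2 <= 0.5, "stability condition for the explicit scheme"

c = np.zeros(nx)
c[nx // 2] = 1.0      # initial contaminant spike in the middle of the domain

for _ in range(5000):
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])

print(f"peak concentration after transport: {c.max():.4f}")
```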

“The request for compute time will go up by a factor of one thousand in the next five years,” predicts Prof. Joachim Treusch, chairman of the board of the Research Center Juelich. “Therefore we will extend our core competency in the area of supercomputing massively in the future.”

“The IBM Blue Gene architecture has proven to be highly attractive to researchers,” said Nurcan Rasig, director of Supercomputing Solutions at IBM in Germany. “The design of this type of computer is especially well suited to capability computing, as high performance and excellent scaling are possible. This is an important feature for getting new scientific results that cannot be reached by conventional HPC clusters.”

The Blue Gene Approach

Herb Schultz, Blue Gene General Manager at IBM, would agree with that assessment. According to him, given the current state of commodity components, it's just not practical to use them to build multi-hundred-teraflop HPC systems.

“There's a physical, practical limit to how big systems can get using certain types of technology,” says Schultz. “You can't just put 50,000 blades together and make this big system — you don't have the space, you don't have the power, etc.”

He says the real advantages of the Blue Gene technology are its very favorable price/performance and performance/watt characteristics and the fact that it was designed to scale extremely well. But to make it commercially viable, it had to be built from relatively modestly priced components: PowerPC 440 processors at a conservative 700 MHz clock speed. The chip's relatively low power requirements allowed IBM to pack a lot of them close together. Each Blue Gene rack contains 1024 dual-processor nodes.
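Those figures square with the systems described above. Assuming each 700 MHz core retires four floating-point operations per cycle (two fused multiply-adds from its dual floating-point unit; the article itself doesn't state this, so treat it as an assumption), the per-rack peak can be checked in a few lines:

```python
CLOCK_HZ = 700e6        # PowerPC 440 clock speed
FLOPS_PER_CYCLE = 4     # assumed: two fused multiply-adds per cycle per core
CORES_PER_NODE = 2      # dual-processor nodes
NODES_PER_RACK = 1024

rack_peak = CLOCK_HZ * FLOPS_PER_CYCLE * CORES_PER_NODE * NODES_PER_RACK
print(f"per rack:          {rack_peak / 1e12:.2f} teraflops")       # ~5.73
print(f"KEK, 10 racks:     {10 * rack_peak / 1e12:.1f} teraflops")  # ~57.3
print(f"Juelich, 8 racks:  {8 * rack_peak / 1e12:.1f} teraflops")   # ~45.9
```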

“It can scale very well,” says Schultz. “We went from 70 teraflops, and six months later doubled it, and then doubled it again. And if someone with enough money wanted to double it again and build a 128-rack system, they could.”

According to Schultz, beyond 128 racks you start to reach the architectural limits of the current technology. But in the future, IBM intends to improve the technology so that a petaflop fits in roughly the same footprint as the current 64-rack Blue Gene system at Lawrence Livermore National Laboratory (LLNL). IBM's R&D PERCS program, described in last week's issue of HPCwire (http://www.hpcwire.com/hpc/614724.html), is another possible avenue to reach petascale systems. But whether PERCS employs Blue Gene technology or goes in a different direction is still an open question.
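The same back-of-the-envelope arithmetic puts numbers on the scaling Schultz describes, again as a sketch that reuses the per-rack estimate from above:

```python
RACK_PEAK_TF = 5.73     # estimated Blue Gene/L per-rack peak, from above

print(f"64 racks (LLNL-sized system): {64 * RACK_PEAK_TF:.0f} teraflops")      # ~367
print(f"128 racks (architectural limit): {128 * RACK_PEAK_TF:.0f} teraflops")  # ~733

# A petaflop in the same 64-rack footprint implies roughly this much
# improvement in per-rack performance:
print(f"needed per-rack speedup: {1000 / (64 * RACK_PEAK_TF):.1f}x")  # ~2.7x
```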

Applications Catching Up

But before that happens, today's applications need to take advantage of the current level of Blue Gene technology. Developers are working hard to unleash the performance in current systems and, according to Schultz, this is starting to happen. After more than a year in commercial production, the Blue Gene ecosystem has begun to mature. Specifically, more applications are being ported to the architecture, attracting a wider range of users.

Schultz says it was the middle of last year that people were proving to themselves that the Blue Gene technology was suitable for their applications and would allow their codes to scale. This encouraged customers with one- or two-rack systems to consider scaling up their hardware.

The Blue Gene architecture was initially designed in collaboration with LLNL for their nuclear weapons analysis mission. But over the past year and a half, applications supporting astronomy (radio telescopes), fluid dynamics (large eddy simulations) and biotechnology (genomics) have been successfully ported to Blue Gene.

“Now it's easier to prove the value of Blue Gene with something other than standard benchmarks,” says Schultz. “We're getting some real code ported and run. The scaling is very good. The performance is good. So I think people are starting to see that Blue Gene is now ready for a variety of applications besides the ones it was originally designed for — high energy physics codes for national laboratories.”

Blue Genes on Wall Street?

There's also been some interest in running financial codes (Monte Carlo techniques and options pricing models) on Blue Gene. In fact, on April 24th, IBM will be pitching this idea to the financial community at the Linux on Wall Street conference (http://www.linuxonwallstreet.com). The company will present a keynote address at the conference describing how Blue Gene technology can be applied to financial applications. IBM would like to encourage involvement from the Wall Street community so that more financial codes can be ported to the architecture.
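To illustrate why such codes map naturally onto a machine like Blue Gene, here is a minimal Monte Carlo pricer for a European call option under standard Black-Scholes assumptions (a generic textbook example, not any code mentioned in the article). Each batch of simulated paths is independent, so the work divides cleanly across many nodes:

```python
import numpy as np

def mc_european_call(spot, strike, rate, vol, maturity, n_paths, seed=0):
    """Price a European call by simulating terminal prices under
    geometric Brownian motion and discounting the mean payoff."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    terminal = spot * np.exp((rate - 0.5 * vol**2) * maturity
                             + vol * np.sqrt(maturity) * z)
    payoff = np.maximum(terminal - strike, 0.0)
    return np.exp(-rate * maturity) * payoff.mean()

# On a parallel machine, each node would price an independent batch
# (with a distinct seed) and the partial results would be averaged.
print(mc_european_call(spot=100, strike=105, rate=0.05,
                       vol=0.2, maturity=1.0, n_paths=1_000_000))
```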

Schultz says that when Blue Gene was conceived, IBM never imagined the technology would be hosting financial applications. But many of these customers are grappling with very compute-intensive applications that must run in data centers with limited space, power and cooling. Blade servers are the traditional solution, but Blue Gene systems can deliver more computing performance using less power and space. And, according to Schultz, from an end user's point of view, Blue Gene looks like a Linux cluster.

Getting Up To Speed

One thing that has stimulated Blue Gene application porting is IBM's own Deep Computing Capacity On Demand (DCCOD) Center.  This has enabled users to get access to the technology without having to purchase a system — an expensive proposition, since a single Blue Gene rack costs over a million dollars!

The DCCOD center allows users to borrow Blue Gene cycles on machines owned and maintained by IBM. It has provided an avenue to the technology for two important groups: (1) ISVs, so that they can research and develop key applications for Blue Gene, and (2) end users, who can trial and scale their custom applications on the technology before committing to a system purchase. Schultz believes the DCCOD Center has been invaluable in enabling developers to get experience with Blue Gene technology.

“This was one of our original challenges,” says Schultz. “When you have a big system like this and the smallest thing you [sell] is 1024 nodes, how can you get people access to it? The On Demand Center gives people that access.”

Apart from the DCCOD, some users can also get access to government-owned Blue Gene systems, where the owners have been mandated to loan out some of their cycles.

An Elite Market

No matter how many applications end up on Blue Gene, it will never be a general-purpose high performance computer. Nor was it intended to be. Blue Gene inhabits the rarefied atmosphere of the HPC capability market, defined by IDC as supercomputers costing over $1 million. This market has had flat or declining revenues for several years.

“I have no reason to believe that the trajectory is going to change,” says Schultz. “But I do think that Blue Gene's share of that space will get bigger.”

Schultz admits that trying to position something like Blue Gene in the marketplace is always a challenge. Why would someone want to buy this? You can talk about its scalability, its number one performance ranking, its power/cooling efficiency, etc. But those are just attributes. According to Schultz, the awareness that's starting to emerge is that it solves problems that couldn't be solved before.

“There are a lot of customers out there with really big problems that are just waiting for the solution to come along,” says Schultz. “That's our customer base — those who really want to move ahead advancing their science.”
