IBM Sees Blue Gene Adoption Growing

By Michael Feldman

August 7, 2008

IBM's Blue Gene technology has received some notable attention lately — especially internationally. Last month, Germany and Japan announced separate deployments of Blue Gene/L systems, each now the highest-performing supercomputer in its respective country. Although the timing may have been coincidental, IBM views these events as part of a growing acceptance of Blue Gene for solving a wider range of high performance computing problems.

Simulating Quarks and Gluons

Japan's High Energy Accelerator Research Organization (KEK), an inter-university research institute, has deployed ten racks of Blue Gene/L technology, configured as three separate systems with an aggregate peak performance of 57.3 teraflops.

KEK identified the three-system configuration as the best approach for the types of particle physics simulations it has in mind. Each simulation run uses a different set of input parameters, so the organization runs massive numbers of simulations simultaneously, sweeping across those parameter values.
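
What KEK describes is an embarrassingly parallel parameter sweep: each simulation is independent of the others, so runs can simply be fanned out across whatever partitions are available. A minimal sketch of that pattern in Python (the function and parameter names here are hypothetical illustrations, not KEK's actual codes):

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_simulation(params):
    """Placeholder for one independent simulation run."""
    coupling, lattice_size = params
    # A real physics code would launch a full job here; we just return a tag.
    return f"coupling={coupling}, lattice={lattice_size}: done"

if __name__ == "__main__":
    # Hypothetical parameter grid; each combination is an independent job.
    jobs = list(product([5.7, 5.9, 6.1], [16, 24, 32]))
    with ProcessPoolExecutor() as pool:
        for result in pool.map(run_simulation, jobs):
            print(result)
```

On a real deployment, each grid point would be submitted as a separate job to one of the three Blue Gene partitions rather than to local processes, but the independence of the runs is what makes the workload scale so cleanly.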

KEK's use of supercomputers has allowed significant achievements in high-energy accelerator science, especially in simulating the dynamics of quarks and gluons, the elementary components of matter. KEK's research into the underlying secrets of nature, including the origin of the universe and matter, requires large-scale numerical simulation and a dramatic increase in computing power.

“As our research in the areas of theoretical high energy physics continues to evolve, the need for computing power is ever greater,” said Shoji Hashimoto, Ph.D., an associate professor at the Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization. “IBM's Blue Gene is the ideal system to offer our institute performance levels that will allow areas of scientific discovery that were previously unattainable.”

Scientific Research for Europe

Meanwhile, Research Center Juelich, one of Germany's three high-profile supercomputing centers, has inaugurated a newly deployed Blue Gene system as the most powerful supercomputer in Europe.

The system joins an existing 8.9-teraflop supercomputer at Research Center Juelich that is also based on IBM POWER architecture technology. Together, the two machines give the center the capacity to meet the varying needs of its scientific user community. Beyond the hardware, Juelich's ability to support researchers in methodology, fast algorithms and program efficiency is another important aspect of the center's infrastructure.

The Juelich Blue Gene/L installation will offer a peak performance of 45.8 teraflops and a sustained performance of 36.5 teraflops. Juelich originally had a single Blue Gene rack; in this latest deployment, it added seven more, connecting them into an eight-rack system.

The Juelich installation will be used for the most compute intensive research tasks of German and European scientists. Serving as a virtual laboratory, the system will be used for scientific discovery in areas such as particle physics, material sciences, life sciences and environmental research. For example, it will be used to simulate the diffusion of harmful materials in soil and in the atmosphere.

“The request for compute time will go up by a factor of one thousand in the next five years,” predicts Prof. Joachim Treusch, chairman of the board of the Research Center Juelich. “Therefore we will extend our core competency in the area of supercomputing massively in the future.”

“The IBM Blue Gene architecture has proven to be highly attractive to researchers,” said Nurcan Rasig, director of Supercomputing Solutions at IBM in Germany. “The conception of this computer type is especially suitable for capability computing, as high performance and excellent scaling are possible. This is an important feature for getting new scientific results that cannot be reached by conventional HPC clusters.”

The Blue Gene Approach

Herb Schultz, Blue Gene General Manager at IBM, would agree with that assessment. According to him, given the current state of commodity components, it's just not practical to use them to build multi-hundred-teraflop HPC systems.

“There's a physical, practical limit to how big systems can get using certain types of technology,” says Schultz. “You can't just put 50,000 blades together and make this big system — you don't have the space, you don't have the power, etc.”

He says the real advantages of Blue Gene technology are its very favorable price/performance and performance-per-watt characteristics, and the fact that it was designed to scale extremely well. But to make it commercially viable, it had to be built from relatively modestly priced components: PowerPC 440 processors at a conservative 700 MHz clock speed. The chip's relatively low power draw allowed IBM to pack a lot of them close together; each Blue Gene rack contains 1,024 dual-processor nodes.
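
As a sanity check, those component figures are consistent with the per-rack numbers cited earlier in this article. A back-of-the-envelope sketch, assuming the commonly cited figure of four floating-point operations per cycle per core for Blue Gene/L's double FPU:

```python
# Back-of-the-envelope peak performance per Blue Gene/L rack.
clock_hz = 700e6        # PowerPC 440 at 700 MHz
flops_per_cycle = 4     # assumed: double FPU with fused multiply-add
cores_per_node = 2
nodes_per_rack = 1024

gflops_per_core = clock_hz * flops_per_cycle / 1e9            # 2.8 GF
tflops_per_rack = gflops_per_core * cores_per_node * nodes_per_rack / 1e3

print(f"Per rack:          {tflops_per_rack:.1f} TF")   # ~5.7 TF
print(f"KEK, 10 racks:     {10 * tflops_per_rack:.1f} TF")  # ~57.3 TF
print(f"Juelich, 8 racks:  {8 * tflops_per_rack:.1f} TF")   # ~45.9 TF
```

At roughly 5.7 teraflops per rack, ten racks land exactly on KEK's 57.3-teraflop aggregate, and eight racks come within rounding of Juelich's 45.8-teraflop peak.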

“It can scale very well,” says Schultz. “We went from 70 teraflops, and six months later doubled it, and then doubled it again. And if someone with enough money wanted to double it again and build a 128-rack system, they could.”

According to Schultz, beyond 128 racks you start to reach the architectural limits of the current technology. But in the future, IBM intends to improve the technology so that a petaflop fits in roughly the same footprint as the current 64-rack Blue Gene system at Lawrence Livermore National Laboratory (LLNL). IBM's R&D PERCS program, described in last week's issue of HPCwire (http://www.hpcwire.com/hpc/614724.html), is another possible avenue to petascale systems. But whether PERCS employs Blue Gene technology or goes in a different direction remains an open question.

Applications Catching Up

But before that happens, today's applications need to take advantage of the current generation of Blue Gene technology. Developers are working hard to unleash the performance of current systems and, according to Schultz, this is starting to happen. After more than a year in commercial production, the Blue Gene ecosystem has begun to mature. Specifically, more applications are being ported to the architecture, attracting a wider range of users.

Schultz says that around the middle of last year, people began proving to themselves that Blue Gene technology was suitable for their applications and would allow their codes to scale. This encouraged customers with one- or two-rack systems to consider scaling up their hardware.

The Blue Gene architecture was initially designed in collaboration with LLNL for their nuclear weapons analysis mission. But over the past year and a half, applications supporting astronomy (radio telescopes), fluid dynamics (large eddy simulations) and biotechnology (genomics) have been successfully ported to Blue Gene.

“Now it's easier to prove the value of Blue Gene with something other than standard benchmarks,” says Schultz. “We're getting some real code ported and run. The scaling is very good. The performance is good. So I think people are starting to see that Blue Gene is now ready for a variety of applications besides the ones it was originally designed for — high energy physics codes for national laboratories.”

Blue Genes on Wall Street?

There has also been some interest in running financial codes (Monte Carlo techniques and options pricing models) on Blue Gene. In fact, on April 24th, IBM will be pitching this idea to the financial community at the Linux on Wall Street conference (http://www.linuxonwallstreet.com). The company will present a keynote address at the conference describing how Blue Gene technology can be applied to financial applications. IBM would like to encourage involvement from the Wall Street community so that more financial codes can be ported to the architecture.

Schultz says that when Blue Gene was conceived, IBM had no thought that the technology would one day host financial applications. But many of these customers are grappling with compute-intensive applications that need to run in data centers with limited space, power, and cooling. Blade servers are the traditional solution, but Blue Gene systems can deliver more computing performance using less power and space. And, according to Schultz, from an end user's point of view, Blue Gene looks like a Linux cluster.
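
The Monte Carlo workloads mentioned above have the same embarrassingly parallel character as the physics codes: each simulated price path is independent, so the work divides cleanly across a rack's 1,024 nodes. A minimal illustrative sketch of Monte Carlo pricing for a European call option (the parameters are invented; this is not drawn from IBM's keynote or any bank's code):

```python
# Minimal Monte Carlo pricer for a European call option under
# geometric Brownian motion. Illustrative only: a production code
# would distribute the paths across nodes and combine partial sums.
import math
import random

def mc_european_call(spot, strike, rate, vol, maturity, n_paths, seed=42):
    rng = random.Random(seed)
    drift = (rate - 0.5 * vol * vol) * maturity
    diffusion = vol * math.sqrt(maturity)
    payoff_sum = 0.0
    for _ in range(n_paths):
        # Terminal price for one simulated path.
        s_t = spot * math.exp(drift + diffusion * rng.gauss(0.0, 1.0))
        payoff_sum += max(s_t - strike, 0.0)
    # Discount the average payoff back to today.
    return math.exp(-rate * maturity) * payoff_sum / n_paths

print(mc_european_call(spot=100, strike=105, rate=0.05,
                       vol=0.2, maturity=1.0, n_paths=200_000))
```

Because the paths never communicate, splitting n_paths across thousands of nodes requires only a final reduction of the partial sums, which is exactly the kind of scaling behavior Schultz describes.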

Getting Up To Speed

One thing that has stimulated Blue Gene application porting is IBM's own Deep Computing Capacity On Demand (DCCOD) Center. This has enabled users to get access to the technology without having to purchase a system — an expensive proposition, since a single Blue Gene rack costs over a million dollars!

The DCCOD center allows users to borrow Blue Gene cycles on machines owned and maintained by IBM. It has provided an avenue to the technology for two important groups: ISVs, which can research and develop key applications for Blue Gene, and end users, who can trial and scale their custom applications on the technology before committing to a system purchase. Schultz believes the DCCOD Center has been invaluable in enabling developers to get experience with Blue Gene technology.

“This was one of our original challenges,” says Schultz. “When you have a big system like this and the smallest thing you [sell] is 1024 nodes, how can you get people access to it? The On Demand Center gives people that access.”

Apart from the DCCOD, some users can also get access to government-owned Blue Gene systems, where the owners have been mandated to loan out some of their cycles.

An Elite Market

No matter how many applications end up on Blue Gene, it will never be a general-purpose high performance computer. Nor was it intended to be. Blue Gene inhabits the rarefied atmosphere of the HPC capability market, defined by IDC as supercomputers costing over $1 million. This market has had flat or declining revenues for several years.

“I have no reason to believe that the trajectory is going to change,” says Schultz. “But I do think that Blue Gene's share of that space will get bigger.”

Schultz admits that positioning something like Blue Gene in the marketplace is always a challenge. Why would someone want to buy it? You can talk about its scalability, its number-one performance ranking, its power and cooling efficiency, and so on. But those are just attributes. According to Schultz, the awareness that's starting to emerge is that Blue Gene solves problems that couldn't be solved before.

“There are a lot of customers out there with really big problems that are just waiting for the solution to come along,” says Schultz. “That's our customer base — those who really want to move ahead advancing their science.”
