Bull’s Market for HPC on Demand

By Nicole Hemsoth

May 18, 2011

In the midst of the general excitement at this past year’s Supercomputing Conference in New Orleans, French high performance computing vendor Bull slipped in news about its HPC on demand service, eXtreme Factory. According to Pascal Barbolosi, the head of Extreme Computing at Bull, the on-demand service has taken off, with several million compute hours logged in the platform’s first six months.

Unlike more general-purpose cloud or on-demand services, Bull’s offering targets users with complex modeling and simulation needs. The preconfigured codes include many used in manufacturing, film and engineering.

In an interview this week to check in on the company’s HPC service, Barbolosi noted that, unlike commercial clouds, eXtreme Factory addresses the requirements of HPC customers by providing on-demand access to remote compute facilities with a preinstalled, preconfigured environment in which ISV applications and open source codes are already available.

In his view, public cloud resources designed in a one-size-fits-all fashion cannot meet the needs of high performance computing users. Accordingly, the Bull HPC head explains that his company opted to “position this HPC on demand service because HPC requirements make it rather different from commercial hyper-marketed clouds.”

Barbolosi told us this week that there were customers running applications on demand with Bull before the actual launch of the HPC cloud. He pointed to a “well-known automotive manufacturer” that was using a few hundred cores of HPC compute servers via a high performance 100Mbit telecom line earlier in 2010.

He says that as this customer has upgraded, replaced and adapted the number and capabilities of the HPC bullx servers it uses, it has been able to continue running its CFD and crash-test applications without interruption. He points to this kind of flexibility as attractive to high performance computing customers, noting that the platform can be used in parallel with on-site resources.

Barbolosi identified another early adopter of the eXtreme Factory platform that used the service for a month in 2010, before the official launch. In this case the customer used CD-adapco’s STAR-CCM+ package with its cloud-friendly, portable ‘power on demand’ licensing mechanism. Depending on a project’s compute needs, he said, the customer can use the same software and license on her own internal compute resources or on Bull’s. This worked out so well that he says they’ve signed on for fresh resources in 2011.

The eXtreme Factory is, not surprisingly, powered exclusively by Bull’s own range of servers. According to Barbolosi, “Most of the infrastructure is comprised of bullx blades (both CPU-only B500 and mixed CPU/GPU B505) interconnected by an efficient QDR InfiniBand network, running bullx SuperComputer Suite and hosted in our data centers.”

Users access the services via a secure, SSL-certified portal to obtain all the necessary functionality for a complete HPC workflow, including organization, uploading input files and data management, publication of applications, submission and monitoring of jobs, and remote visualization and downloading of results.

As the initial release described, in addition to “many thousands” of Xeon processors the data centers are “equipped with a storage environment, with a distributed file system for maximum performance during the processing stages, as well as permanent storage facilities enabling the user, thanks to remote visualization, to enjoy all the convenience of being a local user while avoiding data transfer as far as possible.”

Beyond defending the obvious choice of Bull’s own hardware for the task, he explained that customers would not have been attracted to the service had it run on vanilla servers in a traditional cloud. As he put it, “Traditional clouds don’t offer efficient parallel compute capabilities; vanilla servers don’t offer the throughput that our customers expect.”

On that note, when asked about the way cloud hardware is being positioned as “cloud optimized” (and whether Bull makes that claim), Barbolosi said that as far as Bull is concerned, there is no unique feature of cloud-driven servers that distinguishes them from HPC-optimized servers. In other words, as he put it, there is strong commonality between the two domains, including performance, density and low-consumption features.

Barbolosi says he expects there to be a rise in the overall market for cloud computing in the next decade. He says that many HPC usage models are well adapted to cloud as users require elasticity and the ability to easily ‘burst’ workloads. However, he notes, “there are some technical issues specific to HPC that need to be addressed, such as remote visualization of data (instead of transferring huge data sets back and forth) and the ability to flexibly manage resource allocation.”

He says that these roadblocks have inspired a more conservative approach to HPC clouds than to proven business computing. Still, he says, “nevertheless we consider that cloud will still be an important part [of the market] and could easily exceed 25% to 30% of HPC spending.”

To close, we can take a step back in time to SC10 for this video interview with Pascal Barbolosi as he introduces Bull’s big news, which includes, among other announcements, the eXtreme Factory.
