Bull’s Market for HPC on Demand

By Nicole Hemsoth

May 18, 2011

In the midst of the general excitement at this past year’s Supercomputing Conference in New Orleans, French high performance computing vendor Bull slipped in news about its HPC on demand service, eXtreme Factory. According to Pascal Barbolosi, the head of Extreme Computing at Bull, the on-demand service has taken off, with several million compute hours logged in the platform’s first six months.

Unlike more general-purpose cloud or on-demand services, Bull’s solution is targeted at users with complex modeling and simulation needs. Many of the preconfigured codes are those used in manufacturing, film and engineering.

In an interview this week to check in on progress with the company’s HPC service, Barbolosi noted that unlike commercial clouds, eXtreme Factory addresses the requirements of HPC customers by providing on-demand remote access to compute facilities with a preinstalled, preconfigured environment in which ISV applications and open source codes are ready to use.

In his view, public cloud resources designed in a more one-size-fits-all fashion cannot match the requirements of high performance computing user needs. Accordingly, the Bull HPC head explains that his company is opting to “position this HPC on demand service because HPC requirements make it rather different from commercial hyper-marketed clouds.”

Barbolosi told us this week that there were customers running applications on-demand with Bull before the actual launch of the HPC cloud. He pointed to a “well-known automotive manufacturer” that was using a few hundred cores of HPC compute servers via a high performance 100Mbit telecom line earlier in 2010.

He says that as this customer has upgraded, replaced and adapted the number and capabilities of the bullx HPC servers it uses, it has been able to continue without interruption to its CFD and crash-test applications. He points to this kind of flexibility as attractive to high performance computing customers, noting that the platform can be used in parallel with on-site resources.

Barbolosi identified another early adopter of the eXtreme Factory platform that used the service for a month sometime in 2010 before the official launch. In this case the customer used CD-adapco’s STAR-CCM+ package with its cloud-friendly, portable ‘power on demand’ licensing mechanism. He said that, depending on the project’s compute needs, the customer can use the same software and license on her own internal compute resources or on Bull’s. This worked out so well that he says they’ve signed on for fresh resources in 2011.

The eXtreme Factory is, not surprisingly, powered exclusively by Bull’s own range of servers. According to Barbolosi, “Most of the infrastructure is comprised of bullx blades (both CPU-only B500 and mixed CPU/GPU B505) interconnected by an efficient QDR InfiniBand network, running bullx SuperComputer Suite and hosted in our data centers.”

Users access the services via a secure, SSL-certified portal to obtain all the necessary functionality for a complete HPC workflow, including organization, uploading input files and data management, publication of applications, submission and monitoring of jobs, and remote visualization and downloading of results.
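The workflow described above — upload inputs, submit a job, monitor it, then retrieve results — can be sketched as a simple client loop. This is purely illustrative: the `PortalClient` class, its actions and the `run_job` driver are assumptions for the sake of the sketch, not Bull’s actual portal API.

```python
# Illustrative sketch of a generic HPC-on-demand portal workflow.
# The transport callable stands in for HTTPS requests to the
# SSL-certified portal; every endpoint name here is hypothetical.

class PortalClient:
    """Minimal client for a generic HPC-on-demand portal (illustrative)."""

    def __init__(self, transport):
        # transport(action, payload) -> dict; a real client would make
        # authenticated HTTPS calls here.
        self.transport = transport

    def upload(self, filename, data):
        return self.transport("upload", {"file": filename, "data": data})

    def submit(self, app, inputs):
        return self.transport("submit", {"app": app, "inputs": inputs})["job_id"]

    def status(self, job_id):
        return self.transport("status", {"job_id": job_id})["state"]

    def download(self, job_id):
        return self.transport("download", {"job_id": job_id})["results"]


def run_job(client, app, files):
    """Drive the full workflow: upload inputs, submit, poll, download."""
    for name, data in files.items():
        client.upload(name, data)
    job_id = client.submit(app, list(files))
    while client.status(job_id) not in ("done", "failed"):
        pass  # a real client would sleep between polls
    return client.download(job_id)
```

In practice the remote-visualization step would replace much of the download stage, which is precisely the point Barbolosi makes below about avoiding large data transfers.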

As the initial release described, in addition to “many thousands” of Xeon processors the data centers are “equipped with a storage environment, with a distributed file system for maximum performance during the processing stages, as well as permanent storage facilities enabling the user, thanks to remote visualization, to enjoy all the convenience of being a local user while avoiding data transfer as far as possible.”

Outside of defending the obvious choice of their own hardware to tackle the challenge, he explained that their customers would not have been attracted to the service if they were using vanilla servers in a traditional cloud. As he put it, “Traditional clouds don’t offer efficient parallel compute capabilities; vanilla servers don’t offer the throughput that our customers expect.”

On that note, when asked about the way cloud hardware is being positioned as “cloud optimized” (and whether Bull was making that claim), Barbolosi said that as far as Bull is concerned, there is no unique feature of cloud-driven servers that distinguishes them from HPC-optimized servers. In other words, as he put it, there is strong commonality between both domains, including performance, density and low-consumption features.

Barbolosi says he expects there to be a rise in the overall market for cloud computing in the next decade. He says that many HPC usage models are well adapted to cloud as users require elasticity and the ability to easily ‘burst’ workloads. However, he notes, “there are some technical issues specific to HPC that need to be addressed, such as remote visualization of data (instead of transferring huge data sets back and forth) and the ability to flexibly manage resource allocation.”
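The ‘burst’ usage model Barbolosi mentions amounts to a placement decision: run a job locally while capacity lasts, and overflow to on-demand resources otherwise. The sketch below is a hypothetical illustration of that idea; the scheduler and its policy are my assumptions, not part of Bull’s platform.

```python
# Illustrative "burst" placement: assign jobs to local cores while they
# last, overflow the rest to on-demand (cloud) capacity. The greedy
# policy here is an assumption made purely for illustration.

def schedule(jobs, local_cores):
    """Map each (name, cores) job to 'local' or 'burst'.

    jobs        -- list of (job_name, cores_needed) tuples
    local_cores -- total cores available on-site
    """
    placements = {}
    free = local_cores
    for name, cores in jobs:
        if cores <= free:
            placements[name] = "local"
            free -= cores
        else:
            placements[name] = "burst"  # overflow to on-demand capacity
    return placements
```

For example, with 100 local cores, a 64-core job lands locally, a second 64-core job bursts to the cloud, and a later 32-core job still fits in the remaining local capacity.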

He says that these roadblocks for HPC clouds have inspired a conservative approach in comparison to proven business computing. He says, “nevertheless we consider that cloud will still be an important part [of the market] and could easily exceed 25% to 30% of HPC spending.”

To close, we can take a step back in time to SC10 for this video interview with Pascal Barbolosi as he introduces Bull’s big news, which includes, among other announcements, the eXtreme Factory.
