Airbus Buys Into HPC-as-a-Service

By Michael Feldman

October 18, 2011

High performance computing is getting cheaper every year. But that doesn’t remove the burden of buying these systems on a regular basis when your organization demands ever-increasing computing power to stay competitive. That’s the dilemma a lot of commercial HPC users find themselves in as they wonder how often they should upgrade their HPC machinery. At least one company, Airbus, determined buying HPC systems wasn’t such a great deal after all.

Like all major aircraft manufacturers, Airbus uses high performance computing to support its engineering design work. The company employs it for all its engineering simulation work, including wind tunnel aerodynamics, aircraft structure design, composite material design, strength analysis, and acoustic modeling for both the interior of the aircraft and the exterior engine noise. It’s also used for the embedded systems that run the avionics, environmental alert system, and fuel tank and pump calculations. Designing these increasingly sophisticated aircraft and going head-to-head against competitors like Boeing requires lots of computational horsepower.

Airbus determined that to keep up they would have to increase their HPC capacity, measured as flops delivered for a given price, by a factor of 1.8 every year. The company employs a set of actual engineering codes to benchmark that performance and makes sure that newer HPC systems being considered for deployment fulfill that goal.

The secondary objective was to maximize price-performance. In 2007, after doing a cost analysis, the Airbus bean counters decided it would make more sense for the company to rent HPC capacity rather than acquire the systems outright. Up until then, the aircraft manufacturer had bought their own HPC clusters, installed them in Airbus datacenters, and maintained them for the entire lifetime of those systems.

According to Marc Morere, who heads the Functional Design IT Architecture & Projects group at Airbus, moving to a rent/lease model meant that the money that would have gone into buying equipment could now be applied to buying more HPC capacity. Or as Morere put it: “We prefer to use the costs for our aircraft program, rather than to negotiate with the bank.”

For HPC infrastructure in particular, they determined that it was better to pay in increments rather than up front. Morere says that if they finance HPC systems, they can depreciate the hardware, but those depreciation terms always run five years. Unfortunately, that’s two years longer than Airbus would want to actually operate the hardware. With a company goal of a 1.8-fold increase in HPC capacity each year, the recurring costs after three years became too high to justify keeping the older systems running. “The technology moves too quickly,” says Morere.
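To see why three years is the breakpoint, it helps to run the numbers. The sketch below is a back-of-envelope illustration, not Airbus’s actual accounting: it assumes straight-line five-year depreciation and an arbitrary purchase price, and simply compounds the 1.8x annual capacity target.

```python
# Back-of-envelope sketch (illustrative numbers, not Airbus's actual books):
# why a 1.8x-per-year capacity target clashes with five-year depreciation.

GROWTH = 1.8             # required capacity multiple per year (from the article)
DEPRECIATION_YEARS = 5   # typical straight-line depreciation term
PURCHASE_PRICE = 100.0   # hypothetical cluster price, arbitrary units

for year in range(1, DEPRECIATION_YEARS + 1):
    required = GROWTH ** year  # capacity needed relative to year 0
    book_value = PURCHASE_PRICE * (1 - year / DEPRECIATION_YEARS)
    print(f"year {year}: required capacity = {required:4.1f}x, "
          f"remaining book value = {book_value:5.1f}")

# By year 3 the requirement is ~5.8x what the machine was sized for, yet 40%
# of its purchase price is still on the books -- the two extra years of
# depreciation Morere cites as the problem with buying outright.
```

In other words, a cluster bought today is sized for today’s workload, but by the time it finally comes off the books it would need to be nearly 19 times larger to meet the target.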

In 2007, they first looked into a pure HPC on-demand model, where they would just buy compute cycles. But according to Morere, they couldn’t find a satisfactory solution with HP or any other vendor they talked with. The idea then morphed into a service model where HPC systems would be deployed outside of the Airbus datacenters and leased back to the company.

The only real downside, compared to the on-demand model, is that a service entails a flat fee: you pay the same amount regardless of how much of the available compute capacity you consume. On the flip side, it’s easier for the accountants to budget a fixed monthly cost than one that could vary over time, based not just on changing computational needs but also on volatile electricity prices and labor costs.

In 2007 and 2008, they contracted IBM to host Airbus HPC systems off-site in IBM’s own datacenter. Airbus tapped into the systems remotely for their engineering simulations, but because of the distance between the Airbus research sites and the datacenter, network performance sometimes limited what could be accomplished.

Then in 2009, Airbus inked a deal with HP to install containerized Performance Optimized Datacenters (PODs) on-site, but with HP running the infrastructure as a service. Although the PODs were on Airbus property, they didn’t require a datacenter habitat, so the containerized clusters could be set up virtually anywhere there was electricity and water. The HP service contract includes all the hardware, system setup, maintenance, operation of software, cooling, UPS, and generators. HP even pays the electric bill. All of this is wrapped up in the monthly service fee HP charges Airbus.

Other bidders on the 2009 contract included IBM, SGI, Bull, and T-Systems. Morere says in the end it came down to IBM and HP, with the others being too expensive for the type of all-inclusive service Airbus was interested in. According to Morere, HP was chosen because it had the best technical solution and the best price-performance.

The first phase of the HP contract resulted in the deployment of a POD in Toulouse, France, in 2009. Another POD was added in Hamburg, Germany, in 2010. The original Toulouse POD, based on Intel Nehalem CPUs, was retired in August 2011.

The Toulouse POD was replaced with two Intel Westmere-based PODs with the latest InfiniBand technology. That system, which currently sits at number 29 on the TOP500 list, went into production in July 2011. It consists of 2,016 HP ProLiant BL280 G6 blade servers and delivers about 300 teraflops of peak performance. Although all those servers fit into two containers, each 12 meters long, they deliver the equivalent of 1,000 square meters of HPC datacenter space.
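That 300-teraflop figure squares with a quick back-of-envelope calculation. The article doesn’t name the exact processor SKU, so the core count and clock rate below are assumptions based on a common Westmere-EP part of the period (the six-core Xeon X5670 at 2.93 GHz, with four double-precision flops per cycle per core via SSE):

```python
# Rough sanity check of the ~300 TF peak figure. The server count comes from
# the article; the CPU SKU is an assumption (Xeon X5670 was a common 2011 part).

servers = 2016           # HP ProLiant BL280 G6 blades (from the article)
sockets_per_server = 2
cores_per_socket = 6     # six-core Westmere-EP (assumed)
clock_ghz = 2.93         # assumed X5670 clock rate
flops_per_cycle = 4      # double-precision flops/cycle/core with SSE

peak_gflops = servers * sockets_per_server * cores_per_socket * clock_ghz * flops_per_cycle
print(f"estimated peak: {peak_gflops / 1000:.0f} teraflops")  # ~284 TF
```

At roughly 284 teraflops under these assumptions, “about 300 teraflops of peak performance” is the right ballpark.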

Because the PODs in Toulouse are on Airbus premises, about 50 meters from the company’s main computer facility, they were able to link the HPC cluster to the machines in the datacenter with four 10GbE links. That kind of direct hookup delivered very low latency as well as plenty of bandwidth.

At this point, one might ask why Airbus even operates its own datacenters anymore. Currently the facilities are being used for application servers, storage, and database work. Some of these in-house systems include HP blades, but at this point, not PODs. All the pre-processing and post-processing for the HPC work is performed by these datacenter systems. But since these types of applications are not as performance-bound, the servers there can operate for five years or longer, and thus take advantage of a standard depreciation cycle.

Whether HPC-as-a-service becomes more widespread remains to be seen. Not every customer feels the need to increase HPC capacity at the rate Airbus does, nor does every company buy enough HPC equipment to make a service contract a viable option. But Airbus, at least, seems to have found the financial model and the type of system that make sense for them.
