Uncovering Results in the Magellan Testbed

By Nicole Hemsoth

June 22, 2010

While it’s getting easier to find case studies of cloud deployments in the enterprise, cloud deployments in scientific computing remain, with a few exceptions, harder to track. Accordingly, those in the scientific computing community looking for news about cloud computing are paying close attention to the Magellan testbed, which is set to deliver results that will help researchers tackle the tough question of buying time in the cloud versus investing in their own private clusters.

The Magellan cloud computing project is delivering some interesting results as it continues to alter its cloud environment to put different cloud models to the test. The National Energy Research Scientific Computing Center (NERSC), in conjunction with Argonne National Laboratory, launched the Magellan computational cloud testbed in October 2009 with funds from the American Recovery and Reinvestment Act via the U.S. Department of Energy. The goal of the joint effort is to examine the cost and energy benefits and drawbacks of the cloud computing paradigm for scientists, specifically those working on government-funded projects. The application areas that are either already being explored or are set to enter the cloud span several scientific computing arenas, including genomics, climate research, and applied mathematics.

To evaluate the current progress and challenges for Magellan users and NERSC, HPC in the Cloud discussed the status of the project with NERSC Director Kathy Yelick.

HPCc: As a testbed, what variables will be added or subtracted at the hardware and application levels to test for differences in performance and to support benchmarking?

Yelick: The actual testbed is static in the hardware sense; we’ve installed a cluster that’s the hardware basis for the cloud testbed. It’s an IBM iDataPlex system with an InfiniBand network and Nehalem processors, which is really the high end of cloud when you compare it to what you’d see in a commercial setting. From a software perspective we’ll be doing a lot of experimentation with different types of virtualization and different uses of operating system virtualization. We’ll also be looking at some of the programming models that are available in cloud computing, including the Hadoop implementation of the MapReduce programming model, and we’ll be looking at different configurations of the system to provide people with either virtual clusters or more of a shared resource environment where the boundaries between jobs are more dynamic.
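For readers unfamiliar with the MapReduce model Yelick mentions, the sketch below shows its shape using Hadoop Streaming, which lets the map and reduce phases be ordinary scripts that read stdin and write tab-separated key/value pairs to stdout. The word-count task, file names, and paths here are illustrative assumptions, not details of the Magellan configuration.

    #!/usr/bin/env python
    # wordcount.py -- minimal sketch of the MapReduce model via Hadoop
    # Streaming. Illustrative only; not part of the Magellan setup.
    import sys

    def mapper():
        # Map phase: emit (word, 1) for every word on every input line.
        for line in sys.stdin:
            for word in line.split():
                print("%s\t1" % word)

    def reducer():
        # Reduce phase: Hadoop delivers mapper output sorted by key, so
        # all counts for a given word arrive together and can be summed
        # in a single pass.
        current, count = None, 0
        for line in sys.stdin:
            word, value = line.rstrip("\n").split("\t")
            if word != current:
                if current is not None:
                    print("%s\t%d" % (current, count))
                current, count = word, 0
            count += int(value)
        if current is not None:
            print("%s\t%d" % (current, count))

    if __name__ == "__main__":
        mapper() if sys.argv[1] == "map" else reducer()

Submitted with something like "hadoop jar hadoop-streaming.jar -input in -output out -mapper 'wordcount.py map' -reducer 'wordcount.py reduce' -file wordcount.py" (all paths placeholders), Hadoop runs many mapper instances in parallel and sorts their output by key before the reduce phase begins.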

HPCc: For now it seems that there is a distinct focus on genomics research, given your collaboration with the Joint Genome Institute (JGI), but from your initial releases about the aims of Magellan it appeared there would be a broader focus. What other scientific areas will you be examining?

Yelick: We are actually looking at a broader DOE science focus, but it just turned out there was an immediate need for some work in the genomics area, so we set up a virtual cluster within our cloud testbed for the Joint Genome Institute as a short-term project. They’re still using that virtual cluster, but it will be trailing off in the next few months as they install some of their own hardware. After that we’ll be looking at some other science projects in high energy physics, applied mathematics, and climate data analysis. There’s a project in the Earth Systems Grid that’s looking at climate data; it’s not a climate modeling simulation platform, it’s more of a data analysis platform. So in addition to the compute tests, we also bought close to a petabyte of disk storage to create a storage cloud that’s integrated into our file system.

HPCc: What are some of the results you’ve seen thus far in terms of your goals of examining overall energy efficiency and cost effectiveness, and what comparison points do you have for determining overall efficiency?

Yelick: We selected an energy-efficient system, an IBM iDataPlex Linux cluster with liquid-cooled doors, and did some novel things in the installation to pack it into a tighter space and make more effective use of the cooling system. We’re actually using the return water from other computers’ cooling loops in the facility to feed this system’s cooling, which allows us to save energy.

It’s hard to do an energy-efficiency comparison to Amazon, for example, because they don’t open their configurations to the public, but we’re always looking for ways to make the system more energy efficient. Our real comparison point is not the commercial clouds but the private clusters that individual researchers go out and purchase for their scientific applications. So what we’re exploring is whether the DOE or other government agencies should be buying their own clusters (a research group will go out and buy a rack, or even a 64-node system, to run its own scientific applications) or whether those kinds of purchases should be done in a more consolidated way. In other words, we’re comparing the efficiency of separate private clusters run by individual researchers throughout the lab and university system against a setup like what we have at NERSC, which is a consolidated testbed.

HPCc: In one of your releases about the use of Amazon EC2 for the metagenomics project, it seemed there were some pricing surprises you didn’t anticipate. Does that mean there are unexpected underlying issues in public clouds for scientific users?

Yelick: It is true that we found there are some costs in the commercial clouds that are not obvious when you just look at the pricing models. Those costs can include applications running more slowly in a shared environment with a relatively low-speed network (Ethernet rather than our InfiniBand network). Scientific applications that use more than one processor per job can run much more slowly in an environment like that because of the network performance, which we think is the most significant factor, but also perhaps because of the sharing of the system and the virtualization. If you just look at a price per CPU-hour, you need to be careful, because you really want to look at the application performance as well.
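Yelick’s caution about price per CPU-hour is easy to make concrete: a lower hourly rate can still cost more per job once a slower network stretches the runtime. Every number in the sketch below is invented for illustration; none comes from Magellan’s measurements.

    # Hypothetical comparison: a low hourly price can still lose once a
    # slower network stretches each job's runtime. All numbers invented.
    def cost_per_job(price_per_cpu_hour, cpus, hours_per_job):
        return price_per_cpu_hour * cpus * hours_per_job

    # Suppose a tightly coupled 64-way job takes 2 hours on an InfiniBand
    # cluster and 3x longer on a shared Ethernet cloud (illustrative).
    inhouse = cost_per_job(price_per_cpu_hour=0.12, cpus=64, hours_per_job=2.0)
    cloud = cost_per_job(price_per_cpu_hour=0.08, cpus=64, hours_per_job=6.0)

    print("in-house: $%.2f per job" % inhouse)  # $15.36
    print("cloud:    $%.2f per job" % cloud)    # $30.72

Despite a one-third lower hourly price, the cloud run costs twice as much per completed job in this made-up scenario, which is exactly why the price-per-CPU-hour comparison alone can mislead.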

The other big factor is the storage, which you pay for separately. For instance, in our experience with climate modeling, there’s a lot of data storage and manipulation that goes along with the application; it’s not just a computationally intensive problem. You really have to look at both the cost of the storage and the kind of bandwidth you get from, say, a storage cloud into a compute cloud if you’re doing data analysis. Those are some of the places where I think some of the commercial options are not really configured for highly parallel I/O bandwidth between the storage and compute clouds; moving massive scientific datasets around is not what they are optimized for. Then again, we’re really looking at a different workload than in the commercial setting. That’s one of the things we were trying to understand: to what extent can the systems be identical, and in what ways do they need to be configured differently for a scientific environment?
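The bandwidth point can be quantified the same way. The sketch below, again with purely illustrative figures, shows how the time to stage a dataset between a storage cloud and a compute cloud scales with sustained bandwidth.

    # Illustrative only: hours needed to move a dataset between a storage
    # cloud and a compute cloud at a given sustained bandwidth.
    def transfer_hours(dataset_tb, bandwidth_gbps):
        bits = dataset_tb * 1e12 * 8  # dataset size in bits
        return bits / (bandwidth_gbps * 1e9) / 3600.0

    for gbps in (1, 10, 100):
        print("50 TB at %3d Gb/s: %6.1f hours" % (gbps, transfer_hours(50, gbps)))
    # 1 Gb/s -> ~111 hours; 10 Gb/s -> ~11 hours; 100 Gb/s -> ~1.1 hours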

HPCc: One of the big issues for scientific users is application performance. What have you noticed in this area? In other words, which scientific applications seem best suited for the cloud? What have developers noticed?

Yelick: BLAST is one example of an application we’ve looked at on a number of systems. In terms of application development, there are advantages and disadvantages to the cloud model, and what I’m referring to here is really the virtualization model. The advantage is that it is very flexible: you can choose the OS version you’re going to install and run. Then again, as an application developer you’re also responsible for doing that; in a hardware-as-a-service model you get the raw hardware, but then you need to configure your environment with your operating system, your libraries, and so on. For an application like BLAST, which requires a tremendous amount of throughput and runs many jobs per day throughout the year, taking the time to configure a system like that for a cloud environment makes a lot of sense. With that done, we’ve been able to run some of the metagenomics pipelines in this cloud environment. There are positives and negatives. I was talking to someone looking at detector data in a physics application area; in that case they wanted the control you get from a cloud environment, that is, the ability to run a particular version of the operating system so they could go back and run versions of the application that had been developed several years ago in order to do validation against current versions of the code. That’s a big attraction.
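What makes BLAST such a good fit is that its workload is many independent searches with no communication among them. Below is a minimal sketch of that throughput pattern; it assumes the NCBI BLAST+ blastn binary is installed, and the query directory and database name are placeholders.

    # Throughput-style sketch: run many independent BLAST searches in
    # parallel. Assumes NCBI BLAST+ is installed; names are placeholders.
    import glob
    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    def run_blast(query_path):
        out_path = query_path + ".hits"
        subprocess.run(
            ["blastn", "-query", query_path, "-db", "example_db",
             "-out", out_path, "-outfmt", "6"],
            check=True,
        )
        return out_path

    if __name__ == "__main__":
        queries = glob.glob("queries/*.fa")
        # Each search is independent, so the work scales out with no
        # inter-job communication -- the property that makes this kind
        # of workload forgiving of slow cloud networks.
        with ProcessPoolExecutor(max_workers=8) as pool:
            for result in pool.map(run_blast, queries):
                print("wrote", result)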

I should also mention the Hadoop programming model, which we have used a lot on the Berkeley campus; we are just now in the process of making it available to users on the NERSC testbed. There has also been some work running BLAST on top of Hadoop.

HPCc: Although it is still early to get an overall picture of the suitability of cloud for scientific computing, what are some of the more surprising findings thus far, especially as they relate to any expectations or benchmarks you might have had in mind before the launch of Magellan?

Yelick: I think the biggest is that the difference in performance is visible even when running fairly modest-sized applications across different cloud environments, especially when looking at pricing models. What can seem like a very attractive environment can actually be very effective at something like the BLAST workload, which is basically independent serial jobs in large numbers, each one running independently. That works very well in a lot of different environments, both commercial and on our in-house cluster, and would probably also work well on a lower-cost cluster.

But looking at some of the other kinds of scientific applications, even at fairly modest job sizes, there are significant performance differences between running under a batch-scheduled system, where jobs run synchronously across a sub-cluster on a higher-speed network, and running in an environment that is not designed for that kind of synchronous parallel work, which is what you get in a commercial cloud.
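To make that contrast concrete, here is a small sketch (not taken from the Magellan work) of the tightly coupled pattern Yelick describes: every iteration ends in a global reduction, so each step waits on the slowest node and on the network. It assumes the mpi4py and NumPy packages and an MPI launcher such as mpirun.

    # Sketch of a synchronous parallel job: each step ends with a global
    # reduction, so network latency is paid on every iteration. Run with
    # e.g. "mpirun -n 64 python sync_sketch.py" (mpi4py required).
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    local = np.random.rand(1_000_000)  # this rank's share of the data

    for step in range(100):
        local = np.sqrt(local) + 0.5         # purely local compute
        total = comm.allreduce(local.sum())  # global synchronization
        if comm.rank == 0 and step % 10 == 0:
            print("step %d, global sum %.3f" % (step, total))

On Ethernet, the per-iteration allreduce can dominate the runtime; on InfiniBand it is far cheaper, which is the performance gap Yelick is pointing to.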

The other thing has been the sociological question of what it is that scientists find attractive about cloud computing. This is less quantitative, but having talked to various scientific groups about what makes cloud more attractive than what we already have for scientists, I would say the first issue is really control over when their jobs run, which is the primary reason they go out and buy their own hardware in the first place. It’s really a scheduling issue; it’s about how heavily utilized a system is.

The effect is a system that’s not as well utilized as our other systems at NERSC (which run at around 95 percent utilization); if you give a 64-node sub-cluster to a science group, most likely they’re not going to run it flat out throughout the year. So there is an interesting question about the utilization of systems, which goes back to energy efficiency: you need to look at the work that gets done per kilowatt-hour of energy used, as opposed to looking only at how efficient the computer system itself is.
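That utilization argument translates directly into the work-per-energy metric, because an idle node still draws a large fraction of its peak power. The numbers in the sketch below are entirely hypothetical.

    # Hypothetical: jobs delivered per kWh as a function of utilization,
    # assuming an idle node still draws half its peak power.
    def jobs_per_kwh(utilization, node_kw=0.3, idle_fraction=0.5,
                     jobs_per_node_hour=1.0):
        # Average energy drawn by one node over one hour, busy or idle.
        energy = node_kw * (utilization + idle_fraction * (1.0 - utilization))
        return (utilization * jobs_per_node_hour) / energy

    print("dedicated cluster, 30%% utilized: %.2f jobs/kWh" % jobs_per_kwh(0.30))
    print("consolidated center, 95%% utilized: %.2f jobs/kWh" % jobs_per_kwh(0.95))
    # ~1.5 vs ~3.2 jobs/kWh: the consolidated system does roughly twice
    # the work per unit of energy in this made-up scenario.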

Another thing is that there is some interest in virtualization from groups that want to run particular OS versions because, for example, they’re running a large international project with particular software version requirements. The MapReduce model is also interesting to some, often those who have fairly independent serial work they want to perform, and I think some of the data analysis problems, such as genomics, will fit in that category as well. Other data analysis problems, including detector data (coming out of CERN, for instance, or the Earth Systems Grid), are massively serial jobs, and there are a lot of those in the data analysis area; they will be the best examples for the cloud environment, but that depends on having an architecture that provides high-speed I/O between the storage and compute parts of the cloud.

HPCc: To expand on that, do you think it’s still too early for large-scale scientific computing in the cloud? Better yet, do you think it’s too early for HPC in the cloud?

Yelick: I would rephrase that to ask how useful cloud is to scientific computation. I think there’s a part of the workload in scientific computing that’s well suited to the cloud, but it’s not the HPC end; it’s really the bulk aggregate serial workload that often comes up in scientific computing but is not the traditional arena of high-performance computing. If you look at some of the commercial offerings, like SGI’s cloud cluster, they’re certainly providing a system and environment that will be competitive for HPC, and then it will come down to cost issues: how to run systems most cost-effectively, and the question of service level and what scientists are willing to pay for versus what they would like to have.

There are lots of questions about anything other than large serial work in a cloud environment. The biggest sticking point in the cloud is the integration of the network: having a high-speed network that allows you to run parallel work, and along with that, being able to schedule parallel jobs in a batch way so they can do frequent synchronization across the parallel job.

This can be overcome; we look at cloud as a business model. It’s not about HPC versus clouds; it’s about the individual private clusters that people are still buying versus the cloud. It’s really about trying to figure out whether you can get rid of the private clusters and replace them with a cloud environment.
