UberCloud HPC Experiment Readies for Round Three

By Wolfgang Gentzsch and Burak Yenier

March 5, 2013

This is an open invitation to members of our HPC, CAE, life sciences, and big data communities to join us for the third round of the UberCloud Experiment, where we will jointly apply the cloud computing services model to compute- and data-intensive workloads on remote cluster computing resources.

To all industry end-users, HPC experts, and compute resource and software providers: the UberCloud HPC Experiment is making HPC available as a service, for everybody, on demand, at your fingertips, by exploring the end-to-end process of using remote computing resources as a service and by learning how to resolve the many roadblocks along the way.

The HPC Experiment started in July 2012 with 160 organizations and 25 teams, helping industry end-users explore access to and use of remote computing resources available from HPC centers and from the cloud. Detailed results have been published in the final report and are available upon registration. The second, much improved round of the experiment started last December with more than 350 organizations and 35 teams as of today, and it will conclude at the end of March. Now, for round 3, starting April 1, we invite industry end-users, software providers, HPC experts, and resource providers from HPC centers and from the cloud to join the experiment and collaboratively explore the end-to-end process of remote HPC as a Service, hands on, in 22 well-defined, guided steps.

Why Are We Performing This Experiment?

In the US alone, there are over 360,000 small and medium-size manufacturers, many of them using workstations for their daily design and development work but needing more computing power from time to time. Buying an expensive HPC cluster is usually not an option, and renting computing power from HPC centers or cloud service providers still comes with severe roadblocks, such as the complexity of HPC itself, intellectual property and sensitive data concerns, lengthy and expensive data transfers, conservative software licenses, performance bottlenecks from virtualized resources, user-specific system requirements, and missing standards and lack of interoperability among different clouds. Last but not least, the currently exploding number of different service offerings in the cloud makes it difficult for engineering end-users to locate the solutions or services best suited to their applications’ requirements.

On the other hand, as these roadblocks are successively removed, the benefits of using remote computing resources become extremely attractive, for example: no lengthy procurement and acquisition cycles; shifting some budget from CAPEX to OPEX; gaining business flexibility, i.e., getting additional resources on demand, from your workstation, when you need them, at your fingertips; and scaling resource usage up and down automatically according to your actual needs.

The Benefits of Participating in the Experiment

The UberCloud HPC Experiment has been designed to drastically reduce many of the barriers mentioned above. By participating in this experiment and moving their engineering or big data application onto a remote computing resource, end-users can expect several real benefits, such as:

  • A vendor independent matching platform for digital manufacturing, computational life sciences, big data, and HPC in the Cloud services.
  • Professional match-making of end-users with suitable service providers, so there is no need to hunt for resources and services in the emerging and increasingly crowded Cloud market.
  • Free, on-demand access to hardware, software, and expertise during the experiment.
  • Lowering barriers and risks for frictionless entry into HPC in the Cloud.
  • A one-stop “shopping” experience for resources and services.
  • A carefully tuned end-to-end, step-by-step process for accessing remote resources.
  • Learning from the best practices of other participants.
  • Gaining hands-on experience with the cloud within your own environment.
  • A no-obligation, risk-free proof of concept: no money involved, no sensitive data transferred, no software license concerns, and the option to remain anonymous.
  • Leading the way to increasing business agility, competitiveness, and innovation.
  • Crowdsourcing: building relationships with community members, helping each other, and providing valuable feedback to optimize the platform of the experiment.
  • The beaten path of the experiment reliably guides the end-user to success.
  • By participating in this experiment, end-users become more valuable to their companies.
  • Not getting left behind in the emerging world of cloud computing.
  • And finally, free access to the services directory (the interactive UberCloud Exhibit) with a growing number of engineering cloud services.

On the other hand, the list of benefits for service providers (of software, resources, and expertise) participating in this experiment is similarly rich. To name a few: getting immediate, constructive feedback from the experiment’s end-users on how to fine-tune your services; gaining deeper, practical insight into a new market and a service-oriented business model; risk-free experimentation that allows you to improve your services on the fly during the experiment; getting in touch with potential customers; and gaining public attention by becoming part of widely published success stories. Last but not least, all service providers are encouraged to make use of the interactive UberCloud Exhibit to present their services to the wider HPC, CAE, life sciences, and big data communities.

Teams of Round 1 and Round 2

A sampling of team names from round 1 of the experiment reflects the wide spectrum of applications: anchor bolt, resonance, radiofrequency, supersonic, liquid gas, wing flow, ship hull, cement flow, sprinkler, space capsule, car acoustics, dosimetry, weathermen, wind turbines, combustion, blood flow, chinaCFD, gas bubbles, side impact, and colombiaBio. Round 2 teams were equally varied, with names such as stent simulation, medical devices, photorealistic rendering, ventilation benchmark, roof air inlet, heterogeneous human body, two-phase flow, weather and climate, Hadoop for telecoms, combustion in IC engines, biological diversity, remote viz, acoustic field, electromagnetics, noise vibration, hybrid rocket motor, drifting snow, smoke flow, heat exchanger, gas turbine, bicycle flow, and genomic data analysis.

Now We Are Inviting You to Join Round 3

Round 3 will run from April until the end of June. We expect about 400 organizations to form 50 new teams built around industry end-users’ applications running on remote computing resources. In addition to our current focus areas of HPC, CAE, and the life sciences, we now also invite big data end-users and software, services, and consulting providers to join this experiment, for the reasons outlined in this article. The experiment will be conducted more formally, with more automation, and will be even more user-friendly. The 22-step end-to-end process will be better guided, and the Basecamp ‘team rooms’ for collaboration will be even more comfortable. We will also provide three levels of support: front line (within each team), second level (UberCloud mentors), and third level (software and hardware providers). Finally, we will further grow the interactive UberCloud Exhibit services directory.

And Finally, Why Would You Want to Join the Experiment?

In summary, there are many good reasons for joining this experiment for the next three months. Among them: HPC is complex, and it is easier to tackle this complexity within our community; the barriers to entry into HPC as a Service are low when approached through an experiment; you learn by doing and experiment without risk of failure; you become an active part of this growing community; you explore the end-to-end process of accessing remote computing resources; and you learn how this fits into your research or business direction in the near future.
