Competition in the Cloud: Yahoo, HP and Intel Join the Search for the Future of Computing Services

By John E. West

July 31, 2008

This week Yahoo, HP and Intel announced their contribution to what is becoming an active competition to develop the infrastructure for next-generation computational services. The Cloud Computing Test Bed they announced is broader — in scope and scale — than the earlier IBM-Google announcement establishing the Cluster Exploratory (CluE), and all this competition is definitely a good thing for the academics who will have several years of work on these resources. If the research efforts bear fruit, they may help shape the future of HPC as well.

The idea behind these research efforts is that the increasing demand for “Internet-scale” applications, running on the emerging model of large, generally available compute resources, will not be satisfied by existing approaches to building infrastructure and applications. Fundamental changes — as yet unknown, hence the need for research — will be necessary in computer science and software engineering, from the operating system up to the application.
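
To make the contrast with traditional approaches concrete: the programming model most closely associated with cloud-scale data processing in this era is map/reduce, in which a computation is expressed as two side-effect-free functions and the runtime takes care of distribution, scheduling and fault tolerance across thousands of cores. Below is a minimal, single-process sketch of the idea in Python; it is purely illustrative, not drawn from any of the announced test beds, and the function names are hypothetical:

```python
from collections import defaultdict
from itertools import chain

# Map phase: emit (key, value) pairs independently for each input record.
# Because each call is side-effect free, a real runtime can fan these out
# across a cluster and simply re-run any task that fails.
def map_words(document):
    return [(word, 1) for word in document.split()]

# Reduce phase: combine all values observed for a single key.
def reduce_counts(word, counts):
    return (word, sum(counts))

# Toy stand-in for the cluster scheduler: groups mapper output by key,
# then applies the reducer to each group.
def run_mapreduce(documents, mapper, reducer):
    groups = defaultdict(list)
    for key, value in chain.from_iterable(mapper(d) for d in documents):
        groups[key].append(value)
    return [reducer(k, vs) for k, vs in groups.items()]

if __name__ == "__main__":
    docs = ["the cloud is the future", "the future is distributed"]
    print(run_mapreduce(docs, map_words, reduce_counts))
    # [('the', 3), ('cloud', 1), ('is', 2), ('future', 2), ('distributed', 1)]
```

The research question the test beds pose is what has to change below this layer, in the operating system, the management software and the hardware, for models like this to work well at Internet scale.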

On Tuesday Yahoo, HP and Intel announced the creation of a new open test bed for “the advancement of cloud computing research and education.” The physical dimensions of the test bed span six centers on three continents, with hardware at the Infocomm Development Authority of Singapore (IDA), the University of Illinois at Urbana-Champaign, the Steinbuch Centre for Computing of the Karlsruhe Institute of Technology, HP Labs, Intel Research and Yahoo. Each center will have HP/Intel hardware — the companies refer only to “infrastructure,” so it’s unclear whether there will be one large cluster at each site or a farm of machines — ranging from 1,000 to 4,000 cores that will be used to support cloud software research.

Representatives from the three companies held a conference call on Tuesday to talk about the test bed. On the call were Prith Banerjee, senior vice president of research at HP and director of HP Labs; Prabhakar Raghavan, head of Yahoo Research; and Andrew Chien, vice president of the corporate technology group at Intel and director of Intel Research. During the call, Banerjee said that the main goal of the test bed is to remove the financial and logistical barriers that might otherwise prevent people from developing effective cloud computing applications.

Raghavan said in a prepared statement that “With this test bed, not only can researchers test applications at Internet scale, they will also have access to the underlying computing systems to advance understanding of how systems software and hardware function in a cloud environment.”

The earlier Google-IBM effort, announced in October of last year, is a joint venture to provide “hardware, software and services to augment university curricula and expand research horizons” into ways of building applications that can live happily (and provide effective service to users) in a cloud environment. The companies pledged 1,600 processors in Google and IBM datacenters, and announced at the time that they were partnering with a small number of universities in the U.S., including the University of Washington, Carnegie Mellon, Stanford and MIT. In February of this year, the NSF joined the effort in a role that manages the stream of supplicants seeking access to the resources for research, something one would expect the NSF to be very good at.

When asked why the partners in this new test bed didn’t just join up with the Google/IBM effort, Chien said that while the two efforts are complementary, they are different in important ways: “What we’re trying to do is support research at a variety of levels of the software stack, not only at the application layer, but also down in the system software, in the manageability, and eventually exploiting some of the novel hardware platform features…. my understanding is that the Google/IBM partnership is focused primarily at the application level.” That’s my understanding, too, by the way.

The companies claim that parts of the datacenter infrastructure are up and running now, with more coming online throughout the year. During the discussion about the effort, Raghavan of Yahoo referred several times to the M45 center being “already up and running,” as if that center is to become part of this effort. The precise relationship of that datacenter to this effort was not clearly articulated, however, and we’ll have to wait for further details from the company.

Something else that wasn’t made clear in this announcement is how resources are to be allocated to institutions and individuals that want to build on the platform. It took several months for Google and IBM to announce their relationship with the NSF, which handles the “researcher management” aspect of that project, so we may have to wait a while to find out how that’s going to be handled here. Chien, in response to a question about whether others would be allowed to join, indicated that this week’s announcement is the “leadership step” and that “we believe that the larger infrastructure we can get together, the more valuable it will be to the research community, so we’re open to expanding this and having other folks join.”

Yahoo of course has a history of this sort of thing, having announced a collaboration with India’s Tata Group for large-scale cloud computing research to the tune of 14,000 cores, as well as the M45 datacenter, a 4,000-core machine dedicated to “large-scale systems software research.” Intel, too, has sponsored efforts related to the Cloud Computing Test Bed, including the recently announced Universal Parallel Computing Research Centers in partnership with Microsoft.

Of course, not everyone is uniformly enthusiastic about either the Google or Yahoo announcements. Comments on some of the well-known Internet news sites (including the Bits blog at the NY Times) point to what they see as an accelerating pace of change in the acronyms and buzzwords used to describe old problems, problems that never actually get resolved before someone rebrands them and everyone heads off in a “new” direction.

So, what might all this mean for HPC? Again, I turn to the “million monkeys coding” corollary of the better-known “infinite monkeys typing” theorem. Sure, software programmers aren’t monkeys, but the more of them we have working on large-scale software problems, the better our chances that we’ll develop a new set of approaches to building high performance applications.
