Cloud Computing, Virtualization 2.0 Among NGDC Highlights

By Derrick Harris

August 11, 2008

Did anyone actually think a conference called Next Generation Data Center (NGDC) would come and go without addressing “the cloud?” In 2008 — a year destined to go down in the IT annals as the “Year of the Cloud” — that’s not even a possibility. However, cloud computing wasn’t the only topic discussed at the show, and even when it was the paradigm du jour, its presentation ranged from “this is what it is” to “this is how it looks” to “this is how we’re using it — today.” (And I didn’t even attend all of the sessions dedicated to cloud computing.)

The whole NGDC/LinuxWorld show (held last week in San Francisco) kicked off with a keynote by Merrill Lynch Chief Technology Architect Jeffrey Birnbaum, who outlined the investment bank’s move to “stateless computing.” Actually, he explained, it’s not so much about being stateless as it is about where the state is. Merrill Lynch is moving from a dedicated server network to a shared server network, functioning essentially as a cloud that allows Merrill Lynch to provision capacity rather than machines.

Aside from the architectural change, Birnbaum says another key element of Merrill Lynch’s stateless infrastructure is its enterprise file system, which he believes really should be called an “application deployment system.” In a namespace environment like the Web, all the components needed for an application to run are referenceable through the file system, negating the need for heavy-duty software stacks and golden images. The file system works via a combination of push and pull, or of replication and caching, said Birnbaum. The strategy also works for virtual desktops, he said, with all applications — including the operating system — being streamed to the thin client.
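For readers who want a concrete picture of that idea, here is a minimal sketch (in Python, with purely hypothetical class and path names, not Merrill Lynch’s actual system) of what namespace-based, pull-through delivery of application components can look like:

```python
# Hypothetical sketch: application components are referenced by namespace path
# and pulled (then cached) on first use, rather than baked into a golden image.

class ComponentStore:
    """Stands in for the replicated master copy of application components."""
    def __init__(self, contents):
        self._contents = contents          # e.g. {"/apps/risk/1.2/bin": b"..."}

    def fetch(self, path):
        return self._contents[path]        # in reality: a network fetch ("pull")


class NodeCache:
    """Per-node cache: components are pulled on demand and reused thereafter."""
    def __init__(self, store):
        self._store = store
        self._cache = {}

    def resolve(self, path):
        if path not in self._cache:        # cache miss, so pull from the store
            self._cache[path] = self._store.fetch(path)
        return self._cache[path]


store = ComponentStore({"/apps/pricing/2.0/engine": b"<binary payload>"})
node = NodeCache(store)
print(node.resolve("/apps/pricing/2.0/engine"))   # pulled once, then served locally
```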

But keeping things lightweight and flexible is only part of the challenge; workload management also is important. Birnbaum says widespread virtualization is a key to this type of infrastructure, but some applications can’t handle the performance overhead imposed by running in a virtual environment. For these types of applications, a stateless computing platform needs the ability to host applications either physically or virtually. Additionally, says Birnbaum, everything has to be policy-based so primary applications get their resources when they need them. On the workload management front, Merrill Lynch is working with Evergrid, Platform Computing and SoftModule.
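As a rough illustration of that policy idea (not Birnbaum’s actual policy engine; the workload names, fields and rule are assumptions), a toy placement routine might send overhead-sensitive workloads to physical hosts and everything else to virtual machines, handling higher-priority workloads first:

```python
# Hypothetical sketch of policy-based placement: workloads that cannot tolerate
# hypervisor overhead land on physical hosts, the rest are virtualized, and
# higher-priority workloads are placed first.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    priority: int             # lower number = more important
    overhead_sensitive: bool  # True if it cannot absorb virtualization overhead

def place(workloads):
    placements = {}
    for w in sorted(workloads, key=lambda w: w.priority):
        placements[w.name] = "physical" if w.overhead_sensitive else "virtual"
    return placements

jobs = [
    Workload("risk-batch", priority=2, overhead_sensitive=False),
    Workload("market-data-feed", priority=1, overhead_sensitive=True),
]
print(place(jobs))   # {'market-data-feed': 'physical', 'risk-batch': 'virtual'}
```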

For the folks concerned about capital expenditure, the best part about Merrill Lynch’s stateless vision is that it can be done on mostly (if not entirely) commodity hardware. Because the state is in the architecture instead of an individual machine, Birnbaum says you can buy cheaper, less redundant and less specialized hardware, ditching failed machines and putting the work elsewhere without worry.

One of the big business benefits of stateless computing at Merrill Lynch is that it lets the financial services leader maximize utilization of existing resources. If someone needs 2,000 servers for an exotic derivatives grid and the company is only at 31 percent utilization, it has that spare capacity and doesn’t have to buy those additional servers, Birnbaum explained. Offering some insight into the financial mindset, Birnbaum added that Merrill Lynch buys new servers when it reaches 80 percent utilization, therefore ensuring a capacity cushion in case there is a spike.
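The buying rule reduces to simple arithmetic. Here is a minimal sketch of it, using illustrative server counts rather than Merrill Lynch’s real numbers:

```python
# Minimal sketch of the capacity rule Birnbaum described: new servers are
# bought only once utilization crosses a threshold (80 percent here), which
# preserves headroom for spikes. Server counts below are illustrative only.

def needs_more_capacity(servers_in_use, total_servers, threshold=0.80):
    """Return True when utilization has crossed the buy threshold."""
    return servers_in_use / total_servers >= threshold

total = 10_000
in_use = 3_100                                      # 31 percent utilization
print(needs_more_capacity(in_use, total))           # False: spare capacity exists
print(needs_more_capacity(in_use + 2_000, total))   # still False at 51 percent
print(needs_more_capacity(8_200, total))            # True: time to add hardware
```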

Speaking less about a real-world internal cloud deployment and more about the building blocks of cloud computing was Appistry’s Sam Charrington. One of his key takeaways was that while virtualization is among cloud computing’s driving technologies, a bunch of VMs does not equal a cloud. It’s great to be able to pull resources or machines from the air, Charrington explained, but the platform needs to know how to do it automatically.

Beyond getting comfortable with underlying technologies and paradigms like virtualization and SOA, Charrington also advised would-be cloud users to get familiar with public clouds like Amazon EC2, GoGrid and Google App Engine; inventory their applications to see what will work well in the cloud; and get a small team together to plan for and figure out the migration.

Looking forward, Charrington says the cloud landscape will consist not only of the oft-discussed public clouds like EC2, but also will include virtual private clouds for specific types of applications/industries (like a HIPAA cloud for the medical field) and private, inside-the-firewall clouds. Citing The 451 Group’s Rachel Chalmers, Charrington said the best CIOs will be the ones who can best place applications within and leverage this variety of cloud options.

The cloud also was the focus of grid computing veteran Ravi Subramaniam, principal engineer in the Digital Enterprise Group at Intel. Subramaniam led his presentation by noting that cloud computing is not “computing in the clouds,” mainly because whether it is done externally or internally, cloud computing is inherently organized, and users know the provider — be it Amazon, Google or your own IT department. Illustrating a sort of cloud version of Newton’s third law, Subramaniam pointed out that for every one of cloud computing’s cons, there is an equally compelling pro: security issues exist, but CAPEX and OPEX savings can be drastic; end-users might have limited control of the resources, but those resources are simple to use by design; and so on.

Subramaniam focused a good portion of his talk on the relationship between grid computing and cloud computing, positing that the two aren’t as different as many believe. However, he noted, coming to this conclusion requires viewing grid as a broad, service-oriented solution rather than something narrow and application-specific. In their ideal form, he explained, grids are about managing workloads and infrastructure in the same framework, as well as about matching workloads to resources and vice versa.

For all of its strengths, though, grid computing does have its weaknesses, among which Subramaniam cited the difficulty of applying it and its limited usefulness in small-scale environments. Cloud computing attempts to simplify grid from the user level, he said, which means utilizing a uniform application model, using the Web for access, using virtualization to mask complexity and using a “declarative” paradigm to simplify interaction. Essentially, Subramaniam concluded, the cloud is where grid wanted to go.
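To make the “declarative” point concrete, a request in such a model states only what the workload needs and leaves the how to the platform. The sketch below is purely illustrative; the field names and toy scheduler are assumptions, not any particular vendor’s API:

```python
# Hypothetical sketch of declarative interaction: the user declares the
# workload's requirements, and the platform decides how to satisfy them.

request = {
    "application": "monte-carlo-pricing",
    "cores": 512,
    "memory_gb_per_core": 2,
    "deadline_hours": 4,
    "data_locality": "us-east",
}

def satisfy(req, available_cores=2048):
    """Toy 'platform': accept the request if capacity exists, else defer it."""
    if req["cores"] <= available_cores:
        return f"scheduled {req['application']} on {req['cores']} cores"
    return f"queued {req['application']} until capacity frees up"

print(satisfy(request))   # scheduled monte-carlo-pricing on 512 cores
```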

If users approach both cloud computing and grid computing with an open mind and apply broad definitions, they will see that the synergies between the two paradigms are quite strong. The combination of grid and cloud technologies, Subramaniam says, means virtualization, aggregation and partitioning as needed; a pool of resources that can flex and adapt; and even the ability to leverage external clouds to augment existing resources.
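As a simple illustration of that last point (the function name and numbers are hypothetical), an allocation planner might consume internal capacity first and send only the overflow to an external cloud:

```python
# Hypothetical sketch of augmenting an internal pool with an external cloud:
# internal capacity is used first, and only the overflow goes outside.

def plan_allocation(cores_needed, internal_free):
    internal = min(cores_needed, internal_free)
    external = cores_needed - internal
    return {"internal_cores": internal, "external_cores": external}

print(plan_allocation(cores_needed=600, internal_free=500))
# {'internal_cores': 500, 'external_cores': 100}
```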

Virtualization 2.0

Of course, cloud computing wasn’t the only topic being discussed at NGDC, and one of particular interest to me was the concept of “virtualization 2.0.” In a discussion moderated by analyst Dan Kuznetsky, the panelists — Greg O’Connor of Trident Systems, Larry Stein of Scalent Systems, Jonah Paransky of StackSafe and Albert Lee of Xkoto — all seemed to agree that Virtualization 2.0 is about moving production jobs into virtual environments, moving beyond the hypervisor and delivering real business solutions to real business problems.

But the real discussion revolved around what is driving advances in virtualization. Xkoto is a provider of database virtualization, and Lee said he has noticed that the first round of virtualization raised expectations around provisioning, failover and consolidation, and now users want more. In the usually grounded database space, he noted, even DBAs are demanding the kind of results their comrades in other tiers have seen.

Another area where expectations have increased is availability, said StackSafe’s Paransky. While it used to be only transaction-processing systems at big banks that demanded continuous availability, Paransky quipped (although not without an element of truth) that it’s now considered a disaster if e-mail goes down for five minutes — and God forbid Twitter should go down. People just expect their systems and applications will always be available, and they’re expecting virtualization to help them get there.

Lee added that once you jump in, you have to swim, and users want to continue to invest in virtualization technologies.

However, there are inhibitors. Lee contends that adopters of server virtualization solely for the sake of consolidation risk backing themselves into a corner by relying on fewer boxes to run the same number of applications. If one box goes down, he noted, the effect is that much greater.
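Lee’s point is easy to quantify with back-of-the-envelope math; the sketch below uses illustrative numbers only:

```python
# Rough sketch of the consolidation trade-off: the same number of applications
# on fewer hosts means each host failure takes out more applications.

def apps_lost_per_host_failure(total_apps, hosts):
    return total_apps / hosts

print(apps_lost_per_host_failure(200, 100))  # 2.0 apps hit per failure before consolidation
print(apps_lost_per_host_failure(200, 20))   # 10.0 apps hit once consolidated onto 20 hosts
```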

Fear of change also seems to be inhibiting further virtualization adoption. Scalent’s Stein said companies see the value of virtualization, but getting them to overcome legacy policies around new technology can be difficult. What’s more, he added, it’s not as easy as just ripping and replacing — virtualization needs to work with existing datacenters. Paransky echoed this concern, noting that virtualization can mean uncontrolled change, which is especially scary to organizations with solid change management systems.

Also, he noted, Virtualization 1.0 isn’t exactly past tense, as 70-80 percent of IT dollars are spent on what is already there. Paransky assured the room that, sexy or not, mainframes persist because of this compulsion to improve or maintain existing systems rather than move to new ones.

Moderator Kuznetsky was not oblivious to these obstacles, asking the panel what will drive organizations to actually make the leap to Virtualization 2.0, especially considering the general rule that organizations hate to change anything or adopt new technologies. Xkoto’s Lee commented that the IT world responds to pain, resisting change for the sake of change and holding out until there are real pain points.

Paransky took a more forceful stance, stating that organizations no longer have the luxury to resist change like they did in the past. Customers pay the bills, he says, and they don’t like the turtle-like pace of change — they want dynamism. He noted, however, that organizations don’t hate change because they think it is bad, but rather because it brings risk. The trick is balancing the benefits that virtualization can bring with the need to keep things up and running.
