Timesharing 2.0

By Steve Campbell

November 3, 2009

Cloud computing: Is there anything new to say? A fair question, as hardly a week, or even a day, goes by without an announcement of some new product or service for the “cloud.” When you read about cloud computing in The Economist, BusinessWeek or Forbes, you know something is really happening. Further evidence is the series of IBM prime-time TV ads extolling the virtues of cloud computing. The technology has become mainstream.

One of the reasons business publications are writing about the cloud is that the technology is breaking out from its roots in high performance computing (HPC) and being adopted for commercial applications. But is cloud computing today’s hot technology, one that promises to lower TCO, reduce energy costs, and enable dynamic, agile datacenters, or is it just the latest hype? That is, will cloud computing really happen, and will it deliver on its promises? And what does it mean for high performance computing?

Picture this: You’re sitting at a keyboard and you log in to the system. Your ID is verified and you begin entering the data for your application. When the data entry is finished, the application begins executing your workload, along with many other users’ workloads. Eventually your workload completes and you receive the results, together with a statement itemizing CPU time, memory usage, disk I/O, connect time, and so on: a very comprehensive statement for all the services used. This method of access lets several users share the same system, dramatically lowering the cost of computing, enabling organizations to use compute resources without owning them, and creating a development environment in which new applications are born.
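
What I have just described is, in today’s terms, metered pay-per-use billing. A minimal sketch of the idea in Python follows; the resource names and rates are hypothetical, not the tariff of any actual provider.

```python
# Toy rate card for metered computing; names and prices are hypothetical.
RATES = {
    "cpu_seconds":     0.02,     # $ per CPU-second
    "memory_mb_hours": 0.001,    # $ per MB-hour of memory
    "disk_io_ops":     0.00001,  # $ per disk I/O operation
    "connect_hours":   0.50,     # $ per hour of connect time
}

def statement(usage):
    """Price each metered resource and return line items plus the total."""
    lines = [(name, qty, qty * RATES[name]) for name, qty in usage.items()]
    total = sum(cost for _, _, cost in lines)
    return lines, total

# One job's metered usage, much as a 1971 operator might have tallied it.
job = {"cpu_seconds": 840, "memory_mb_hours": 2048,
       "disk_io_ops": 150000, "connect_hours": 1.5}

items, total = statement(job)
for name, qty, cost in items:
    print(f"{name:16} {qty:>10} ${cost:9.2f}")
print(f"{'TOTAL':16} {'':>10} ${total:9.2f}")
```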

Sound familiar? What I described was my experience using a computer system at a college in London, circa 1971. The era of timesharing had just begun. The computer system sat in the datacenter (the glass house) and employed new technologies such as virtualization, based on LPARs and domains, and workflow management.

In my mind, cloud computing today is Timesharing 2.0. What’s new? There are three basic differences: 1) access, 2) standards, and 3) management/middleware software.

  1. Access today is from any Web-based device connected to the Internet; anytime, anywhere, any device has finally arrived.
  2. The use of standards-based software, connectivity, etc., enables heterogeneous systems to co-exist within the same cloud.
  3. Rich suites of management and middleware software and virtualization tools relieve IT administrators of the burden of managing this heterogeneous infrastructure and of mapping workloads onto it.

It’s that simple. Timesharing 2.0, better known as cloud computing, has arrived. Enough of the soapbox.

Cloud computing basics

Cloud computing is becoming ubiquitous and yet it is still evolving. Consequently, there is no accepted industry definition. Gartner defines cloud computing as “a style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service to external customers using Internet technologies.”

Or try the Wikipedia definition:

Cloud computing is the provision of dynamically scalable and often virtualized resources as a service over the Internet on a utility basis. Users need not have knowledge of, expertise in, or control over the technology infrastructure in the “cloud” that supports them. Cloud computing services often provide common business applications online that are accessed from a web browser, while the software and data are stored on the servers.

The general consensus is that cloud computing has the following attributes:

  • Users can access their applications and data from any device connected to the Internet.
  • The concept generally incorporates a combination of the following:
    • Infrastructure as a Service (IaaS)
    • Platform as a Service (PaaS)
    • Software as a Service (SaaS)
  • It is frequently associated with virtualization and Web 2.0 technologies.
  • It exhibits elastic scaling that is dynamic and fine-grained.
  • Users can access large scale computing resources without making the heavy investment in IT infrastructure.
  • Users can access IT resources as a utility service on a pay-for-usage model: computing on demand.

The major benefit of cloud computing is that companies can access the latest IT infrastructure for their workloads without making a huge investment in infrastructure; they simply pay for usage. This is good for everyone, but for small and economically strapped firms it is especially attractive.

One of the key software technologies is virtualization. This is significantly different from Timesharing 1.0, where virtualization was proprietary and built into the hardware. Today virtualization is a fundamental, standards-based technology that enables cloud resource provisioning, even in a heterogeneous environment; it exploits the x86 VT instructions to improve performance and supports multiple operating systems. Hypervisor technology is complemented by a rich set of tools spanning everything from resource provisioning to live migration.
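
To make the provisioning and live-migration tooling concrete, here is a minimal sketch using the open-source libvirt API on a KVM host. The connection URIs, VM name, and disk image path are hypothetical, and error handling is omitted; this illustrates the kind of tooling in question, not any particular vendor’s product.

```python
# A minimal provisioning sketch with libvirt on a hypothetical KVM host.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>cloud-node-01</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/cloud-node-01.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
dom = conn.createXML(DOMAIN_XML, 0)     # provision and boot a transient VM
print(dom.name(), "active:", dom.isActive() == 1)

# Live migration, one of the tools mentioned above: move the running VM
# to another (hypothetical) host in the pool without shutting it down.
dest = libvirt.open("qemu+ssh://other-host/system")
dom.migrate(dest, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
```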

Delivery models

Cloud computing architects are faced with many decisions and choices when developing cloud deployment models. There are several different models that are accepted in the industry today:

  • Private Cloud: Operated solely by and for the organization.
  • Public Cloud: Available to the general public on a pay-for-usage model.
  • Hybrid Cloud: A composition of private and public clouds.

There are also infrastructure delivery models for seasonal fluctuations, for example, at tax time. In such models, companies with private clouds open up part of their infrastructure as a public cloud to absorb the seasonal traffic.
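
Here is a minimal sketch of that seasonal policy, with hypothetical capacities and job sizes: work lands in the private cloud until it is full, and the overflow is sent to a public provider on a pay-for-usage basis.

```python
# Toy placement policy for seasonal overflow; all numbers are hypothetical.
from dataclasses import dataclass

PRIVATE_CAPACITY = 512  # cores available in the private cloud

@dataclass
class Job:
    name: str
    cores: int

def place(jobs):
    """Yield (job, target) pairs, bursting to the public cloud when full."""
    used = 0
    for job in jobs:
        if used + job.cores <= PRIVATE_CAPACITY:
            used += job.cores
            yield job.name, "private cloud"
        else:
            yield job.name, "public cloud (pay-for-usage)"

tax_season = [Job("tax-batch-1", 300), Job("tax-batch-2", 200),
              Job("tax-batch-3", 128)]
for name, target in place(tax_season):
    print(f"{name} -> {target}")
```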

Trends

IT vendors will continue to evolve their product lines and develop more “marketingware” as they strive to define their uniqueness, value-add, and messaging. Many of them need a lot of help differentiating themselves.

But there are a number of offerings from existing vendors that are worth watching:

The datacenter-in-a-box, or container. This is a self-contained IT datacenter delivered in a shipping container, such as Sun’s Modular Datacenter or Verari’s FOREST Container. These container-based datacenters can provide almost instant datacenter capacity for today’s cloud computing infrastructure and are designed to be eco-friendly, cost-effective, and flexible.

The traditional approach. Solutions like IBM’s Cloudburst, based on IBM’s BladeCenter, or HP’s BladeSystem Matrix are conventional blade designs that can serve as cloud infrastructure. These datacenter-in-a-rack solutions can help organizations drive down complexity and growing operating costs, in particular utility OPEX, by delivering genuinely green computing.

Management and middleware software. Software that simplifies the deployment and operation of hardware (servers, storage, and networking) is the critical glue that makes the cloud model possible; the model depends on it to hide the complexity of the underlying infrastructure from the end user. For IT organizations building and delivering cloud services, rich software tools ease the task while reducing the time to deploy services and simplifying management.

Security. The protection of data and algorithms is perhaps the biggest concern end users have about cloud computing. Cybercrime is on the rise despite efforts to thwart the hackers, and as consumer technology, social networking, and Web 2.0 continue their rapid adoption in the workplace, building secure cloud IT infrastructure is becoming more and more difficult. The best advice here is to design security in before you start building and deploying services. Don’t wait for a breach before taking action. Do your research.

Service. We’re starting to see third-party compute cycle brokers emerge. Nimbis Services, for example, connects its clients through an industry-wide brokerage and clearinghouse with third-party compute resources, commercial application software, and expertise. The goal is to reduce risk, provide pay-as-you-go pricing, and match users with resources.

Hybrid architectures. Over the past three or four decades, HPC has seen many architectures for solving complex scientific workloads: big SMP nodes, vector supercomputers such as Cray’s, and mini-supercomputers such as Convex’s changed the price/performance dynamics of HPC, and numerous MPP systems came and went. The rise of powerful commodity chipsets changed the market forever, giving birth to distributed cluster and grid architectures connected via high-speed network fabrics. The one architecture that survived is the symmetric multiprocessor (SMP), in which multiple CPUs access a large shared memory, typically ccNUMA, under a single OS instance. Today that architecture lives at the chip level: the multicore, 64-bit x86 chipsets from Intel and AMD are SMP on a chip.

For example, Convey Computer’s server architecture combines the familiar world of x86 computing with hardware-based, application-specific instructions to accelerate certain HPC applications. Another approach to hybrid computing is that taken by vendors such as 3Leaf Systems and ScaleMP, whose solutions make a group of x86 servers look like one big SMP system with a single pool of CPU and memory that can be dynamically allocated or repurposed to applications as needed. Essentially, they turn a distributed architecture into a ccNUMA SMP.

Storage and networking. Most analysts confirm that storage demand is doubling every eighteen months, and HPC workloads in particular have huge storage needs that can stress the system. Recent developments include the Panasas and Penguin partnership to provide high-performance parallel storage and on-demand services designed specifically for high performance computing, as well as Amazon S3 (Simple Storage Service), an online storage web service from Amazon Web Services that offers effectively unlimited storage through a simple web services interface.
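
As an illustration of how simple that interface is, storing and retrieving a job’s results in S3 takes only a few calls with the open-source boto library; the bucket name, key, and filenames below are hypothetical, and AWS credentials are assumed to be set in the environment.

```python
# Upload an HPC job's results to Amazon S3 and fetch them back.
import boto
from boto.s3.key import Key

conn = boto.connect_s3()                        # reads AWS keys from the environment
bucket = conn.create_bucket("hpc-job-results")  # bucket names are globally unique

k = Key(bucket)
k.key = "run-042/output.dat"
k.set_contents_from_filename("output.dat")      # assumes the local file exists

k.get_contents_to_filename("output-copy.dat")   # retrieve it from anywhere
```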

In the network arena, InfiniBand continues to increase its market penetration due to lower price points and a more mature software ecosystem. More interesting, however, is that several vendors are now building InfiniBand capabilities into their HPC-focused cloud solutions.

The increased demand for network performance is driven by HPC applications, and the new generations of x86 chips are able to fully utilize 10 Gigabit Ethernet (10GigE). Performance demand coupled with growing volumes of data creates the perfect storm for 10GigE adoption. One final comment on networking is the expected growth in converged network adapters (CNAs) and Fibre Channel over Ethernet (FCoE), both of which offer reduced costs and higher throughput.

How big is the opportunity?

For the vendors of products and services, the opportunity is large and growing rapidly. In some cases, it is hard to get any attention for your offerings unless the word “cloud” is attached to the product or service.

At the International Supercomputing Conference (ISC’09) in June 2009, Platform Computing surveyed IT executives who attended the conference. Over a quarter (28 percent) of those surveyed were planning to deploy private clouds in 2009, citing the increased workload demand of applications and the need for IT to cut costs as the two major factors behind the planned adoption of HPC clouds.

The traditional analyst firms that specialize in market sizing and growth are predicting a bright future for IT infrastructure and services in the cloud. One of the most recent forecasts appears in an October 2009 IDC Exchange blog post titled “IDC’s New IT Cloud Services Forecast: 2009-2013.” In that post, IDC forecasts that “the five year growth outlook remains strong, with a five-year annual growth rate of 26 percent — over six times the rate of traditional IT offerings.” Full details will be published in the upcoming IDC’s Cloud Services: Global Overview.
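
For a sense of scale, a 26 percent annual growth rate compounds to more than a tripling of the market over that five-year window:

```python
# Compound a 26 percent annual growth rate over the five-year forecast.
rate, years = 0.26, 5
multiplier = (1 + rate) ** years
print(f"Market size multiplier over {years} years: {multiplier:.2f}x")  # ~3.18x
```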

The HPC connection

For the high performance computing space, there are a growing number of companies and organizations providing services that target the special needs of this group of users. Our companion article profiles the vendors addressing this market today.

The HPC research community is also on board. In February of this year, UC Berkeley researchers released a report (PDF) discussing the impact and future directions of cloud computing. It served as one of the first academic treatises on the subject. Eight months later, the US Department of Energy launched a five-year, $32 million program to study how scientific codes can make use of cloud technology. That work will take place at the DOE’s Argonne and Berkeley national laboratories.

Conclusion

Cloud computing is not new; it is largely an evolution of IT infrastructure. The pay-as-you-go model of cloud computing has its roots in the timesharing era of the 1970s. We are now seeing cloud computing grow from a promising business concept into one of the fastest growing segments of the IT industry.

Organizations with challenging workload profiles, and recession-hit companies, are realizing they can access best-in-breed applications and infrastructure easily, quickly, and on a pay-for-usage basis. This now includes HPC users, who are looking to the cloud to maximize their FLOPS per dollar.

About the Author

Steve Campbell, an HPC industry consultant and HPC/cloud evangelist, has held senior VP positions in product management and product marketing at HPC and enterprise vendors. He has served as vice president of marketing for Hitachi, Sun Microsystems, and FPS Computing, and has held lead marketing roles at Convex Computer Corporation and Scientific Computer Systems. He has also served on the boards of, and as interim CEO/CMO at, several early-stage technology companies.
