Timesharing 2.0

By Steve Campbell

November 3, 2009

Cloud computing: Is there anything new to say? A fair question, as it seems hardly a week, or even a day, goes by without an announcement of some new product or service for the “cloud.” When you read about cloud computing in The Economist, BusinessWeek or Forbes, you know something is really happening. Further evidence is the series of IBM prime time TV ads extolling the virtues of cloud computing. The technology has become mainstream.

One of the reasons business publications are writing about the cloud is that the technology is breaking out from its roots in high performance computing (HPC) and is being adopted for commercial applications. But is cloud computing today’s hot technology that promises to lower TCO, reduce energy costs, and enable dynamic, agile datacenters, or is it just the latest hype? That is, will cloud computing really happen, and will it deliver on its promises? And what does it mean for high performance computing?

Picture this: You’re sitting at a keyboard and you log in to the system. Your ID is verified, which is good, and you begin to enter the data for your application. When you finish entering the data, the application begins executing your workload, along with many other users’ workloads. Eventually your workload completes and you receive the results, together with a statement for CPU time, memory usage, disk I/O, connect time, and so on: a very comprehensive statement of all the services used. This method of access lets many users share the same system, dramatically lowering the cost of computing, enabling organizations to use compute resources without owning them, and creating a development environment from which new applications emerge.

Sound familiar? What I described was my experience using a computer system at a college in London, circa 1971. The era of timesharing had just begun. The computer system lived in the datacenter (the glass house) and used new technologies such as virtualization, based on LPARs and domains, and workflow management.

In my mind, cloud computing today is Timesharing 2.0. What’s new? There are three basic differences: 1) access, 2) standards, and 3) management/middleware software.

  1. Access today is from any Web-based device connected to the Internet; anytime, anywhere, any device has finally arrived.
  2. The use of standards-based software, connectivity, etc., enables heterogeneous systems to co-exist within the same cloud.
  3. Rich suites of management and middleware software and virtualization tools relieve IT administrators of the burden of managing this heterogeneous infrastructure and of mapping workloads onto it.

It’s that simple. Timesharing 2.0, better known as cloud computing, has arrived. Enough of the soapbox.

Cloud computing basics

Cloud computing is becoming ubiquitous and yet it is still evolving. Consequently, there is no accepted industry definition. Gartner defines cloud computing as “a style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service to external customers using Internet technologies.”

Or try the Wikipedia definition:

Cloud computing is the provision of dynamically scalable and often virtualized resources as a service over the Internet on a utility basis. Users need not have knowledge of, expertise in, or control over the technology infrastructure in the “cloud” that supports them. Cloud computing services often provide common business applications online that are accessed from a web browser, while the software and data are stored on the servers.

The general consensus is that cloud computing has the following attributes:

  • Users can access their applications and data from any device connected to the Internet.
  • The concept generally incorporates a combination of the following:
    • Infrastructure as a Service (IaaS)
    • Platform as a Service (PaaS)
    • Software as a Service (SaaS)
  • It is frequently associated with virtualization and Web 2.0 technologies.
  • It exhibits dynamic, fine-grained elastic scaling.
  • Users can access large scale computing resources without making the heavy investment in IT infrastructure.
  • Users can access IT resources as a utility service on a pay-for-usage model: computing on demand.

The big benefit of cloud computing is that companies can access the latest IT infrastructure for their workloads without making a huge up-front investment; they simply pay for usage. This is good for everyone, but it is especially attractive for small and economically strapped firms.

One of the key software technologies is virtualization. This is significantly different from Timesharing 1.0, where virtualization was proprietary and built into the hardware. Today virtualization is a fundamental technology that enables cloud computing resource provisioning, for example, in a heterogeneous environment. It is based on industry standards, uses x86 virtualization extensions such as Intel VT to enhance performance, and supports multiple operating systems. Hypervisor technology is complemented by a rich set of tools, from resource provisioning to live migration.
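To make the hardware-assist point concrete, here is a minimal sketch, assuming a Linux host where CPU capabilities are exposed through /proc/cpuinfo, that checks whether the processor advertises the Intel VT-x (vmx) or AMD-V (svm) flags a hypervisor relies on:

```python
# Minimal sketch: detect x86 hardware virtualization support on a Linux host.
# Assumes /proc/cpuinfo exists (Linux only); not part of any vendor toolkit.

def hardware_virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    """Return the set of virtualization-related CPU flags found, if any."""
    wanted = {"vmx", "svm"}  # vmx = Intel VT-x, svm = AMD-V
    found = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                found |= wanted & set(line.split(":", 1)[1].split())
    return found

if __name__ == "__main__":
    flags = hardware_virtualization_flags()
    if flags:
        print("Hardware virtualization supported:", ", ".join(sorted(flags)))
    else:
        print("No vmx/svm flags found; a hypervisor must fall back to software techniques.")
```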

Delivery models

Cloud computing architects are faced with many decisions and choices when developing cloud deployment models. There are several different models that are accepted in the industry today:

  • Private Cloud: Operated solely by and for the organization.
  • Public Cloud: Available to the general public on a pay-for-usage model.
  • Hybrid Cloud: A composition of private and public clouds.

There are also infrastructure delivery models built for seasonal fluctuations, for example, at tax time. In such models, companies with private clouds open up part of their infrastructure to outside users, in effect creating public clouds to absorb the seasonal traffic.
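As a rough illustration of how such a hybrid model might be driven, the sketch below shows a hypothetical placement policy that keeps a job on the private cloud while capacity remains and bursts the overflow to a public provider. The class names, capacities, and jobs are assumptions for illustration only, not any vendor's API.

```python
# Hypothetical hybrid-cloud placement policy (illustrative only).

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    cores: int

class HybridScheduler:
    def __init__(self, private_capacity_cores: int):
        self.private_capacity = private_capacity_cores
        self.private_in_use = 0

    def place(self, job: Job) -> str:
        """Return 'private' if the in-house cloud has room, else 'public'."""
        if self.private_in_use + job.cores <= self.private_capacity:
            self.private_in_use += job.cores
            return "private"
        return "public"  # burst seasonal overflow to a pay-for-usage provider

scheduler = HybridScheduler(private_capacity_cores=512)
for job in [Job("nightly-batch", 256), Job("tax-season-peak", 384)]:
    print(job.name, "->", scheduler.place(job))
```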

Trends

IT vendors will continue to evolve their product lines and develop more “marketingware” as they strive to define their uniqueness, value-add, and messaging. Many of them need a lot of help in differentiating themselves.

But there are a number of offerings from existing vendors that are worth watching:

The datacenter-in-a-box or container. This is a self-contained IT datacenter delivered in a shipping container, such as Sun’s Modular Datacenter or Verari’s FOREST Container. These container-based datacenters can provide almost instant datacenter capacity for today’s cloud computing infrastructure, and they are designed to be eco-friendly, cost-effective, and flexible.

The traditional approach. Solutions like IBM’s CloudBurst, based on IBM’s BladeCenter, or HP’s BladeSystem Matrix are conventional blade designs that can serve as cloud infrastructure. These datacenter-in-a-rack solutions can help organizations drive down complexity and growing operating costs, in particular reducing OPEX utility costs by delivering true green computing solutions.

Management and middleware software. The software that simplifies the deployment and operation of hardware (servers, storage, and networking) is the critical glue that makes the cloud model possible. The model depends on this software to hide the complexity of the underlying infrastructure from the end user. For the IT organizations that build and deliver cloud services, rich software tools ease the task, reduce the time to deploy services, and simplify management.

Security. The protection of data and algorithms is perhaps the biggest concern end users have regarding cloud computing. Cybercrime is on the rise despite efforts to thwart the hackers. As consumer technology, social networking, and Web 2.0 continue their rapid adoption in the workplace, building secure cloud IT infrastructure is becoming more and more difficult. The best advice here is to design in security before you start building and deploying services. Don’t wait for a breach before taking action. Do your research.

Service. We’re starting to see third-party compute cycle brokers emerge. Nimbis Services, for example, connects its clients through an industry-wide brokerage and clearinghouse with third-party compute resources, commercial application software, and expertise. The goal is to reduce risk, provide pay-as-you-go access, and match users with resources.

Hybrid architectures. Over the past three or four decades, HPC has seen many architectures for solving complex scientific workloads: big SMP nodes, vector supercomputers such as Cray’s, and mini-supercomputers such as Convex’s changed the price/performance dynamics of HPC. We have also seen numerous MPP systems. The rise of powerful commodity chipsets changed the market forever and gave birth to distributed cluster and grid architectures connected via high-speed network fabrics. The one architecture that survived is the symmetric multiprocessor (SMP), in which multiple CPUs access a large shared memory, typically ccNUMA, under a single OS instance. Today that architecture lives at the chip level: the multicore, 64-bit x86 chipsets from Intel and AMD are SMP on a chip.

For example, Convey Computer’s server architecture combines the familiar world of x86 computing with hardware-based, application-specific instructions to accelerate certain HPC applications. Another approach to hybrid computing is that provided by vendors such as 3Leaf Systems and ScaleMP. These solutions enable a group of x86 servers to look like one big SMP system with a single pool of CPU and memory that can be dynamically allocated or repurposed to applications as needed. Essentially, they turn a distributed architecture into a ccNUMA SMP.

Storage and networking. Most analysts agree that storage requirements are doubling every eighteen months. HPC workloads, in particular, have huge storage needs that can stress the system. There are developments such as the recent Panasas and Penguin partnership to provide high-performance parallel storage and on-demand services designed specifically for high performance computing. Amazon S3 (Simple Storage Service) is an online storage web service offered by Amazon Web Services that provides virtually unlimited storage through a simple web services interface.
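To give a feel for that web services interface, here is a minimal sketch using the boto3 Python SDK (a more recent AWS library than existed when this was written); the bucket name, key, and configured credentials are assumptions:

```python
# Minimal sketch of storing and retrieving an object in Amazon S3.
# Assumes the boto3 SDK is installed and AWS credentials are configured;
# "my-hpc-results" is a hypothetical bucket name.

import boto3

s3 = boto3.client("s3")

# Upload a small result file under a chosen key.
s3.put_object(Bucket="my-hpc-results", Key="run-001/output.txt",
              Body=b"energy = -1.2345 Hartree\n")

# Retrieve it later from any machine with access to the bucket.
obj = s3.get_object(Bucket="my-hpc-results", Key="run-001/output.txt")
print(obj["Body"].read().decode())
```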

In the network arena, InfiniBand continues to increase its market penetration due to lower price points and a more mature software ecosystem. More interesting, however, is that several vendors are now building InfiniBand capabilities into their HPC-focused cloud solutions.

The increased demand for network performance is driven by HPC applications, and the new generations of x86 chips are able to fully utilize 10 Gigabit Ethernet (10GigE). Performance demand coupled with increased volumes of data creates the perfect storm for 10GigE adoption. One final comment on networking is the expected growth in converged network adapters (CNAs) and Fibre Channel over Ethernet (FCoE). Both offer the benefits of reduced costs and higher throughput.

How big is the opportunity?

For the vendors of products and services, the growth opportunity is large and growing rapidly. In some cases, it is hard to get any attention for your offerings if the word “cloud” is not associated with the product or service.

At the International Supercomputing Conference (ISC’09) in June 2009, Platform Computing surveyed IT executives who attended the conference. Over a quarter (28 percent) of the IT executives surveyed are planning to deploy private clouds in 2009. Increased application workload demand and the need for IT to cut costs are cited as two major factors behind the planned adoption of HPC clouds.

The traditional analyst firms that specialize in market sizing and growth are predicting a bright future for IT infrastructure and services in the cloud. One of the most recent forecasts is in an October 2009 IDC Exchange blog titled IDC’s New IT Cloud Services Forecast: 2009-2013. In this post, IDC is forecasting that “the five year growth outlook remains strong, with a five-year annual growth rate of 26 percent — over six times the rate of traditional IT offerings.” Full details will be published in the upcoming IDC’s Cloud Services: Global Overview.

The HPC connection

For the high performance computing space, a growing number of companies and organizations are providing services that target the special needs of this group of users. Our companion article surveys the vendors addressing this market today.

The HPC research community is also on board. In February of this year, UC Berkeley researchers released a report (PDF) discussing the impact and future directions of cloud computing. It served as one of the first academic treatises on the subject. Eight months later, the US Department of Energy launched a five-year, $32 million program to study how scientific codes can make use of cloud technology. That work will take place at the DOE’s Argonne and Berkeley national laboratories.

Conclusion

Cloud computing is not new; it is largely an evolution of IT infrastructure. The pay-as-you-go model of cloud computing has its roots in the timesharing era of the 1970s. As such, we are seeing cloud computing grow from a promising business concept to one of the fastest growing segments of the IT industry.

Organizations with challenging workload profiles or recession-hit companies are realizing they can access best-in-breed applications and infrastructure easily, quickly, and on a pay-for-usage basis. This now includes HPC users, who are looking to the cloud to maximize their FLOPS per dollar.

About the Author

Steve Campbell, an HPC industry consultant and HPC/cloud evangelist, has held senior VP positions in product management and product marketing for HPC and enterprise vendors. Campbell has served as vice president of marketing for Hitachi, Sun Microsystems, and FPS Computing, and has also held lead marketing roles at Convex Computer Corporation and Scientific Computer Systems. Campbell has also served on the boards of, and as interim CEO/CMO of, several early-stage technology companies.
