Cisco’s Cloud CTO Clarifies Strategy, Describes Datacenters of the Future

By Nicole Hemsoth

January 24, 2011

Lew Tucker discusses the datacenter of the future, sheds light on the “many clouds” theory, and describes the perfect storm in computing that is leading to new paradigms in IT.

Although Cisco has a viable stake in the future of cloud computing, its position has been difficult to pin down, even though its Unified Computing System (UCS) server architecture and networking commitments give it a solid chance to shape the market.

Other than a few scattered announcements and the well-publicized appointment of Lew Tucker (of Sun and Salesforce fame) as Cloud CTO nearly six months ago, Cisco has been reluctant to announce a full-blown strategy for how it plans to stake its claim in the arena. The relative silence was broken this past week, when the company finally revealed its approach somewhat formally in a video interview with Tucker.

In something of a “coming out party” for Cisco’s cloud roadmap, Lew Tucker chatted at length about the role the company might play in a market that is still sorting out its cloud computing winners and losers.

At the beginning of his tenure, Tucker stressed the value of the network as the heart of the cloud, a point he says is overlooked in all of the hype and excitement over cloud computing. In the strategy interview, however, he expands on the role of the network in securely delivering applications and gives us a glimpse into his view of the datacenter of the future.

A World of Many Clouds

When asked about the vendor shakeout that seems inevitable as the cloud market matures in the coming years, Tucker said that rather than mega-providers staking a claim in every vertical, we will see the development of industry-specific clouds.

He notes that clouds will form around needs and communities; within healthcare, for example, there will be a small throng of HIPAA-compliant clouds, along with similarly fine-tuned offerings attuned to the regulatory and security needs of government, financial services, and other sectors.

In light of this concept of specialized clouds, Tucker noted that some of Cisco’s enterprise customers are already looking at what kinds of enterprise-class private clouds service providers can host.

Despite this focus on “many clouds” serving disparate needs-based communities, Tucker sees a “much larger cloud on the horizon,” visible when we step back and look at the breadth of connected devices available today, a number that is sure to grow. From automobiles to sensors to mobile devices of all shapes and sizes, this complexity and range provides “the greatest example for why networking is so critical to the cloud” and makes security an even more pressing issue.

In Tucker’s view, “if we look at the growing number of connected devices, whether mobile devices or even sensors with electrical power meters or even in the automobile itself, those devices are increasingly connected to the internet. So now you have in essence a mini-cloud driving around on the highway—this is the greatest example for why networking is so critical to the cloud; now we need to have the security associated with these networked devices.”

The Datacenter of the Future

Revealing Cisco’s general strategy in cloud computing for the coming years, Tucker emphasized the dual, complementary roles of networking and system architecture as key to changing the way datacenters are built.

Tucker sketched a “building blocks” approach to helping new customers construct clouds, in which Cisco provides the essential infrastructure components, and he surveyed the broad range of connected devices and the diverse end users and needs behind them. But the most striking element of his talk was his vision of how datacenters, based on the cloud model, are set to change.

In Lew Tucker’s opinion, there are certain points of dramatic inefficiency in the way datacenters are built and managed. As he described:

“When you build out internal architectures where you put an application on a server with an operating system, and then you move to the next application—as you add more and more applications into the datacenter, each with their own individual architectures, you don’t get economies of scale, you get very low utilization, and you get enormous complexity because you’ve tied the applications to the infrastructure.

Instead, what we’re doing with cloud is we’re saying build a cloud over the infrastructure…turn the infrastructure itself into a service—in which now the applications become virtualized so they can pick whatever operating system they need, they’re running on a virtual machine, they can be turned on or off—they are essentially being provided on-demand.

This means that the IT organization at these future datacenters can scale large to get efficiencies that way and can become totally automated since the infrastructure’s main goal is simply to provide a pool of resources to be used by the applications. This is a much more efficient way to build out datacenters and drive down cost as well as increase agility.”
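To make the on-demand model Tucker describes concrete, here is a minimal sketch, assuming a local KVM host with the libvirt Python bindings installed, of an application VM being registered against a shared resource pool and “turned on or off” programmatically. The VM name, sizing, and disk path are illustrative assumptions, not anything Cisco prescribes.

    import libvirt

    # Illustrative-only domain definition: the application declares what it
    # needs, and the pooled infrastructure supplies it on demand.
    DOMAIN_XML = """
    <domain type='kvm'>
      <name>app-vm-01</name>
      <memory unit='MiB'>1024</memory>
      <vcpu>1</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <source file='/var/lib/libvirt/images/app-vm-01.img'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    conn = libvirt.open("qemu:///system")  # connect to the hypervisor pool
    dom = conn.defineXML(DOMAIN_XML)       # register the VM with the pool
    dom.create()                           # "turn it on": boot on demand
    print("Running:", dom.name())

    dom.shutdown()                         # "turn it off" when the work is done
    conn.close()

Because the application is decoupled from any particular server, the same definition can be instantiated wherever the pool has capacity, which is exactly the economy-of-scale argument Tucker is making.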

Tucker also explained the concept of the network as a platform, which in essence means “creating a network platform driven by programmatic APIs. We can do things like automate and build systems like UCS, which is driven by APIs. Now software itself can do all the provisioning. It’s no longer the individual switch or router; it’s the system that comprises the network that drives it.”

While it is not difficult to see where UCS fits into Cisco’s cloud strategy, Tucker explains that part of the plan is to build APIs into every networkable product the company sells that will touch the cloud.
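As a hypothetical illustration of what API-driven networking looks like from the software side, the sketch below provisions a VLAN across a fabric with single calls to a controller rather than by configuring each switch. The endpoint, payload, and token here are invented for this sketch; Cisco’s actual UCS interface is an XML API, and none of these URLs are real.

    import requests

    # Invented controller endpoint and bearer token -- placeholders only.
    CONTROLLER = "https://ucs-manager.example.com/api"
    HEADERS = {"Authorization": "Bearer <session-token>"}

    # One call provisions the VLAN across the whole fabric; software,
    # not a person at a console, does the provisioning.
    vlan = {"name": "app-tier", "id": 210}
    resp = requests.post(f"{CONTROLLER}/vlans", json=vlan,
                         headers=HEADERS, timeout=30)
    resp.raise_for_status()

    # Attach the VLAN to a service profile so every blade that instantiates
    # the profile inherits the network configuration automatically.
    resp = requests.put(f"{CONTROLLER}/service-profiles/web-app/vlans/210",
                        headers=HEADERS, timeout=30)
    resp.raise_for_status()

The point is Tucker’s: “it’s no longer the individual switch or router, it’s the system that comprises the network that drives it.”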

A Perfect Storm in Computing

Perhaps not surprisingly, Tucker sees the cloud as the product of a natural course in computing evolution, with the network at the heart of that progress. He briefly traces the crescendo of network and architecture innovation that led to cloud computing, decade by decade: mainframes in the 1960s, the minicomputer in the 70s, the client-server and then web transitions of the 80s and 90s, virtualization in the 2000s, and now a new decade defined by cloud.

This progression is, in Tucker’s view, simply the manifestation of the new internet: the natural extension of a trend that has been building and compounding, just as with other technological paradigm shifts.
 
In addition to the faster, more ubiquitous access to networked devices that is opening this next opportunity in computing, Tucker describes the “perfect storm” that is brewing. These storm clouds are “forming between the continued advance of Moore’s Law, which is driving down the cost of computing, coupled with the explosive growth of the Internet, as well as technology advances like virtualization.” While he acknowledges that this new era is still dawning, he points to cloud service providers like Amazon Web Services (AWS) as a sign that cloud computing is the next major shift in IT.

Tucker argues that AWS is at the forefront of making it possible for web developers and small companies to get into cloud computing—and that this is changing the economics of computing in yet another way.

As Cisco’s Cloud CTO puts it, “If you’re going to Sand Hill Road to get money as a startup, they’re not going to give you money for infrastructure; they’re going to say you go and buy it from the cloud—that way they lower their risk and there’s the pay-as-you-go model.”
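Tucker’s pay-as-you-go point is easy to see in the AWS tooling of the period. Here is a minimal sketch using the boto library (the access keys and AMI ID are placeholders): a startup rents a server by the hour and stops paying the moment it terminates the instance.

    from boto.ec2.connection import EC2Connection

    # Placeholder credentials and machine image -- not real values.
    conn = EC2Connection("<access-key>", "<secret-key>")

    # Rent a server on demand instead of buying one up front.
    reservation = conn.run_instances("ami-00000000",
                                     instance_type="m1.small")
    instance = reservation.instances[0]
    print("Launched", instance.id)

    # ... run the workload ...

    # Terminate when done; the meter stops with the instance.
    conn.terminate_instances([instance.id])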

Beyond viewing public cloud resources as a driver of business, Tucker expects rapid enterprise adoption of private clouds as companies recognize this trend, driven by economies of scale, and move to take advantage of it.

The catch is that until datacenters are rebuilt along these lines, it will be difficult for enterprise datacenters to achieve the same cost benefit. This is where Cisco is making its play: refining datacenter architecture to look more like that of the large cloud service providers, rather than the traditional model of infrastructure as the carrier of applications.

You can view the full interview with Tucker here.
