The Business of Disruptive Innovation

By Michael Feldman

November 14, 2010

Like every technology-based sector, high performance computing takes its biggest leaps by the force of disruptive innovation, a term coined by the man who will keynote this year’s Supercomputing Conference (SC10) in New Orleans. Clayton M. Christensen doesn’t know a whole lot about supercomputing, but he knows a great deal about the forces that drive it.

For the past 15 years, Christensen, a professor at the Harvard Business School, has been studying how technological innovation works, how it can drive some businesses to succeed, and how it can cause others to fail spectacularly. Today he is considered one of the leading experts on innovation. At SC10, he will attempt to impart some of this wisdom to the HPC faithful.

Not a techno-geek by any means, Christensen focuses on the business end of disruptive innovation. In 1997 he penned his first book on the subject, The Innovator’s Dilemma, wherein he describes the challenges of managing innovation. Since then he’s developed a set of well-respected theories on innovation and has published a number of other books that explore different aspects of the subject. HPCwire recently got the opportunity to speak with Christensen to ask him about his work and how his theories can apply to the high performance computing industry.

From Christensen’s perspective, disruptive innovation is not a technical idea; it encompasses a business model that is at the heart of how technology is delivered to the marketplace. In a nutshell, disruptive innovation represents a new value to the marketplace, and it usually emerges as a simpler and less expensive alternative to established technologies. But it is not a market-specific concept. Christensen has done his research by studying how the innovation process works in a generic sense, not by studying an industry, like high performance computing, and then developing a theory specifically applicable to it.

According to Christensen, there’s a basic problem with the way the world is designed: data is only available about what happened in the past, and it’s convincingly available only about the distant past. So when managers make predictions about the future using historical data, those predictions tend to be very unreliable.

So how is one to predict the future? The answer is theory, says the Harvard professor. “A really good theory gets down to the fundamental insight on why the world works the way it does,” explains Christensen. “You guys are scientists and engineers and use theories all of the time in the technical dimensions. But now there is a set of theories about the business side that are very valuable.”

The group Christensen works with at Harvard has spent years developing business management models that can help predict which kind of product, service or company is likely to succeed and which will likely fail. Some of his students have had remarkable success applying this framework to real-life situations. For example, one of Christensen’s students successfully predicted the demise of Google’s Wave communication platform, an all-encompassing web-based communication tool that the search giant put on the shelf after just four months of user trials.

The HPC business, of course, lives and breathes in a world of disruptive technologies. From the “Attack of the Killer Micros” that all but wiped out custom processor-based supercomputing in the 1990s, to today’s emergence of general-purpose GPU computing, HPC seems especially prone to being reshaped by simpler technologies from below.

That may explain why even established HPC players like IBM, Cray, and HP often struggle to make their supercomputing businesses profitable. The challenge for the industry leaders is that they need sustaining technologies to maintain their business model, says Christensen. Disruptive technologies are not good fits for market leaders, since these companies tend to cater to customers high up the food chain. In other words, the IBMs of the world need to continually create higher-value products to feed their best clients. Alternatively, they can acquire other companies whose products match their existing customer base.

Christensen’s theories actually predict this type of business interaction quite well. For example, in the 1960s, the X-ray machine was the only device that let doctors peer inside the body. But in 1971, a British company called EMI launched computed tomography (CT), a high-end technology that delivered superior imaging since it revealed soft tissues as well. Within a year, the leaders of the X-ray market — GE, Siemens and Philips — developed better CT technology than EMI and eventually drove the company out of the business.

The next medical imaging technology was Magnetic Resonance Imaging (MRI), which turned out to be an even better way to look at certain structures inside the body. But again, the early developers of MRI technology were overtaken by GE, Siemens, and Philips. For both CT and MRI devices, the established companies found they could sell them at even better profits than X-ray machines.

On the other hand, when ultrasound technology was developed, it was a different story. Ultrasound didn’t produce crystal-clear images, but the devices were inexpensive and simple to operate, so they could be purchased and used as standard equipment in doctors’ offices. GE, Siemens and Philips bypassed the ultrasound market because the financial incentives were wrong for their business structure. So a whole new set of vendors emerged for ultrasound products. It was a true disruptive innovation.

If Christensen’s models had been applied to startups like ClearSpeed or SiCortex, they might have revealed that the technologies those companies developed, as good as they were, did not fit the disruptive profile at all, nor did they offer a sustaining technology for larger vendors. His theories might also have predicted the recent rash of HPC software tool acquisitions: Cilk Arts, Interactive Supercomputing, RapidMind, TotalView Technologies, Visual Numerics, and Acumem. All of these tool companies had sustaining technologies of value to the larger buyers, in this case Intel, Microsoft, and Rogue Wave Software.

So what’s the next big disruptive technology? Christensen thinks it could very well be cloud computing. According to him, the cloud is setting itself up to be a countervailing force that will cut across mainframe and high-end computing. As such, it has the potential to usurp the established business model of HPC. “The supercomputer leaders should watch out,” he warns.
