August 28, 2008
How many times have you heard the word "ecosystem" in reference to the information technology market? Some people aren't comfortable with the terminology, but I think the analogy to the natural environment is near perfect. Ironically, even the common environmental use of the term is itself an analogy. The prefix "eco" means house, not nature or environment, as people might assume. Regardless of the etymology, the interrelationships between hardware, software, and the market have many of the same characteristics as a biological ecosystem.
Because of the way the mass media covers environmental issues, one might get the impression that, unless humans are involved, all animals and plants live in perfect harmony with their environment. In truth, there is no such thing as a perfectly adapted organism. Adaptation to the environment varies widely among species, and marginally adapted ones become candidates for extinction, with or without man's help. For an unromantic perspective on evolution, read Richard Dawkins' The Blind Watchmaker, a book that describes the random workings of natural selection. It's not always a pretty picture.
The IT ecosystem works on the same fundamental principles as the natural ecosystem. We have innovation (evolution) and market forces (environmental selection) determining the relative success of hardware and software products. The Itanium microprocessor and the Ada language were sound designs, but failed to thrive because of competition from more established technologies. The current dominance of the x86 architecture, the Windows and Linux operating systems, and the C/C++ and Java programming languages is the result of good matches between human capabilities, technology compatibilities, and applications.
We even have adaptation. The aforementioned Itanium chip was designed to be the dominant microprocessor species when it was first developed. But AMD quickly evolved the x86 into a 64-bit architecture that overran Itanium's territory. Despite this, Itanium survives today in the smaller niche of high-end servers.
The ascent of the x86 architecture took place at a time when the computing ecosystem was reasonably stable -- the 1970s through the 1990s. Applications written in serial programming languages like C, Fortran, and COBOL automatically got faster with each passing year as clock speeds rose. Today the situation is different. The stagnation of clock speeds means processors must evolve to multicore architectures, and programming languages, libraries, middleware, and operating systems must evolve along with them.
This is one reason we're seeing more architectural diversity, x86 or otherwise. Every CPU vendor seems to be developing architectures with at least eight cores, while GPUs and the Cell processor are expanding their domain from the graphics space into the CPU arena.
At the Hot Chips conference this week, participants talked up their respective multicore wonders. Fujitsu previewed its eight-core Venus, the 128 gigaflop Sparc64 chip headed for enterprise servers and supercomputers in 2009, while China revealed plans to build a petaflop supercomputer in 2010, based on its homegrown four- and eight-core Godson 3 processors. Sun Microsystems confirmed that its 16-core Rock processor is on track, but won't arrive until the second half of 2009. Intel, of course, has been talking up its multicore Nehalem chips as well as its upcoming manycore Larrabee processor for months now.
The other reason for increased diversity in the computing ecosystem is a new focus on visual and high performance computing -- two fast-growing markets with some admitted overlap. For these applications, the Cell processor, GPUs, and perhaps the Larrabee processor may be the new stars. At this week's NVISION conference in San Jose, NVIDIA attempted to position itself and its products at the nexus of this new programming paradigm, despite Intel's identical claim at IDF last week. If AMD could afford a multi-day event, I'm sure they'd be saying the same thing.
The diversity of parallel architectures is also reflected in new software frameworks. As I mentioned yesterday, development environments like CUDA (for GPUs), Intel's Threading Building Blocks (for multicore CPUs), and RapidMind's Platform (for both) have appeared just in the past couple of years. There are more being offered or on the drawing board (not to mention the traditional parallel programming interfaces like MPI and OpenMP). In fact, there is no doubt that there are many more parallel programming frameworks than there are parallel architectures -- a situation that will probably not endure. Like the natural ecosystem, the market selects the winners and discards the losers.
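To make the contrast concrete, here is a minimal sketch in C of the incremental style of parallelism offered by OpenMP, one of the traditional interfaces mentioned above; the array names and sizes are illustrative only, not drawn from any particular code. The same kernel would look quite different in CUDA or Threading Building Blocks, which is exactly the sort of diversity the market will eventually prune.

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    /* Serial loops like this one once got faster "for free" as clock speeds
     * rose. On multicore chips, the work has to be spread across cores
     * explicitly -- here via OpenMP's parallel-for pragma. */
    int main(void)
    {
        static double a[N], b[N], c[N];

        for (int i = 0; i < N; i++) {
            a[i] = i * 0.5;
            b[i] = i * 2.0;
        }

        double sum = 0.0;

        /* Each thread handles a chunk of the iteration space; the reduction
         * clause safely combines the per-thread partial sums. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            c[i] = a[i] + b[i];
            sum += c[i];
        }

        printf("sum = %f (using up to %d threads)\n", sum, omp_get_max_threads());
        return 0;
    }

Compiled with an OpenMP-aware compiler (for example, gcc -fopenmp), the same source runs serially or in parallel depending on the hardware and thread count -- the kind of graceful adaptation that keeps a framework alive in this ecosystem.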
But the market also maintains some degree of diversity. Big players like Intel, IBM, and Microsoft are balanced by smaller players like AMD, Sun, and Red Hat so that choice is maintained. Ecosystem diversity, while at times confusing, is generally a good thing. By providing choice, the ecosystem's overall stability is enhanced, even at some cost in efficiency. And if the market environment changes quickly, a diverse ecosystem also ensures that more vendors will be around to adapt. The ongoing concern about the Wintel near-monoculture in the PC space points to people's uneasiness with a lack of diversity.
While generally very useful, standards such as programming interfaces, communication protocols, instruction sets, and hardware reference designs also work against ecosystem diversity. And standards, just like our genetic heritage, tend to accumulate over time. In HPC, for example, the ubiquity of MPI and OpenMP codes means that newly devised parallel paradigms have to either incorporate these models into their design or be content to go after only new applications. Because standards become intimately tied to all applications, they become the collective DNA for the IT ecosystem. The problem is that developers are just human, but they end up playing God trying to figure out the best DNA to keep the ecosystem running optimally. And even though we're not that good at the God thing yet, we're still better off than the blind watchmaker.
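As a footnote to that DNA analogy, here is a minimal MPI sketch in C showing the kind of code that makes the legacy so sticky; the per-rank value is just a placeholder for a real computed result. Any new parallel paradigm that can't coexist with calls like these effectively forfeits the existing application base.

    #include <stdio.h>
    #include <mpi.h>

    /* Each rank contributes one value; MPI_Reduce combines them on rank 0.
     * Countless production HPC codes are structured around calls like these,
     * which is why the MPI interface behaves like inherited DNA. */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local = (double)rank;   /* stand-in for a real per-rank result */
        double total = 0.0;

        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks = %f\n", size, total);

        MPI_Finalize();
        return 0;
    }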
Posted by Michael Feldman - August 27, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.