Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

March 16, 2007

The End of Local Computing: A Remote Possibility?

Michael Feldman

If, as Sun Microsystems claims, “The Network is the Computer,” then why do I have to buy a new computer every few years? I used to think it was inevitable that we would all be using thin clients instead of PCs to access computing. I still believe this will happen, but I'm not as sure as I used to be.

On the surface, it makes no sense that we use PCs for the majority of our computational needs. Most people typically keep their computer's processor busy only a few percent of the time during the day, and not at all at night. (Even a datacenter that has not consolidated its servers with virtualization does better than that — usually between 5 and 15 percent utilization, depending on who you talk to.)

And then there's the effort required to constantly deal with updating the operating system and applications; keeping the computer free of viruses, malware, and other security threats; and making sure data is backed up on a regular basis. Some of this has been automated, but it's still rather annoying to be constantly reminded that you need to upgrade/update/repurchase the latest version of some piece of software or other. Imagine if we had to maintain our TVs this way.

There's also the inconvenience of having your files locally stored on a variety of machines, which may not be easily accessible to one another. If I want to work on the same document at home, at work, or on vacation, that usually involves an extra layer of software to deal with. Why bother with all this when the Internet can provide a simple online environment for distributed computing?

The Internet is the obvious platform for remote computing, but so far it does very little of it. Mostly it distributes data and performs simple transactions. Some web-hosted Microsoft Office-type application suites are beginning to appear (Sun's StarOffice, Google Apps, and Zoho), but most people continue to use their PC as a fat client to the Internet, preferring to do activities like word processing and constructing spreadsheets locally.

So what's holding up remote computing? The same factors that act as conservative influences throughout IT: software momentum and user habits.

The sheer mass of data files that people depend on is tied to applications that run mostly on personal computers — things like Word documents, Excel spreadsheets and PowerPoint presentations. People have become used to the look-and-feel of using those programs to do their work. It's no coincidence that many of the web-hosted Office-type applications mimic the actual Microsoft versions as much as possible.

The large foundation of software already targeted to the Windows operating system encourages new application development in the same environment. Not only are large numbers of developers already familiar with the Windows API, but whole libraries and other Windows-based software components can be reused to build or upgrade applications. In contrast, Internet-based software frameworks, like AJAX (Asynchronous JavaScript and XML), are just starting to develop momentum.

At the same time, there's a hardware battle going on between local computing and remote computing. Local computing is encouraged by ever-cheaper processors, a trend driven by Moore's Law. Remote computing is favored by faster networks, driven by advances in optical fiber communication. On the face of it, networks should be winning. By a performance-per-dollar criterion, Moore's Law is doubling transistor density every 18 months or so, while optical fiber technology is doubling its bandwidth every 9 months (according to a 2001 Scientific American report).
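To see how quickly those two doubling rates diverge, here's a back-of-envelope sketch using the 18-month and 9-month figures cited above (the three-year horizon is my own illustrative choice):

```python
# Compound growth under the two doubling periods cited above:
# transistor density doubles every 18 months (Moore's Law),
# fiber bandwidth doubles every 9 months (2001 Scientific American figure).

def growth_factor(months: float, doubling_period: float) -> float:
    """Multiplicative growth after `months`, given a doubling period in months."""
    return 2 ** (months / doubling_period)

years = 3
months = years * 12
cpu = growth_factor(months, 18)  # transistor density
net = growth_factor(months, 9)   # fiber bandwidth

print(f"After {years} years: transistors x{cpu:.0f}, bandwidth x{net:.0f}")
# After 3 years: transistors x4, bandwidth x16
```

In other words, on these assumptions bandwidth gains a factor of four on processors every three years — which is what makes the "networks should be winning" intuition so tempting.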

If you're not careful with your reasoning, you might conclude that it will be more efficient to distribute data around a network of processors than to compute with the same data on a local processor. But this ignores the fact that data-intensive workloads are limited by the speed at which you can feed bytes into the processor; the on-chip networks that connect memory to processor and CPUs to each other will always be faster than external networks. It also ignores a hidden cost of building an external network infrastructure that can take advantage of faster bandwidth technology: the cost of deployment.
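The feed-the-processor point is easy to make concrete. The bandwidth figures below are my own rough, hypothetical ballpark numbers for the era, not from any survey, but the orders of magnitude are what matter:

```python
# Rough illustration of why external networks bottleneck data-intensive work.
# Bandwidth figures are hypothetical, ballpark values for illustration only.

MEMORY_BW = 10e9        # bytes/s: a local memory bus, roughly 10 GB/s
NETWORK_BW = 100e6 / 8  # bytes/s: a 100 Mb/s broadband link

def transfer_time(nbytes: float, bandwidth: float) -> float:
    """Seconds to move `nbytes` at the given bandwidth."""
    return nbytes / bandwidth

working_set = 1e9  # a 1 GB working set for a data-intensive job
print(f"local memory: {transfer_time(working_set, MEMORY_BW):.1f} s")
print(f"network:      {transfer_time(working_set, NETWORK_BW):.1f} s")
# local memory: 0.1 s
# network:      80.0 s
```

A three-orders-of-magnitude gap in delivery time swamps any performance-per-dollar advantage the network link might have on paper.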

Networks depend upon a host of supporting gear — routers, modems, switches, adapters, etc. — most of which you never see. That's because network infrastructure is a common resource, so larger organizations like corporations, business alliances, and governments must build it before individuals can use the bandwidth. For example, you might be able to buy a PC with Intel's newest 45nm processor technology within a year, and realize the performance benefits immediately (for that 2 percent of the time you're actually using the chip). But when's the last time your Internet connection got faster?

Here's another way to look at it. A relatively small number of users in top tier government labs and datacenters have access to 10 gigabits per second bandwidth today. It will take years before access to this technology trickles down to everyday individuals. On the other hand, the latest level of CPU (and GPU) technology is available to everyone almost simultaneously.

So what does this all have to do with high-end computing? Plenty. A lot of the same influences that apply to personal IT also apply to HPC, namely, entrenched software and cultural habits. The dominance of Windows and Linux/UNIX software and the HPC user experience with clusters create an environment that favors local computing, while the lack of standards in distributed computing APIs and user interfaces holds back potential growth in grid and utility computing.

Microsoft, in particular, is beginning to exploit its software stack as leverage into the HPC market. And it's a big lever. Windows platforms are ubiquitous, even in Linux/UNIX computing environments. The company's Windows Compute Cluster Server 2003 product is targeted at the deskside cluster crowd, most of whom already use Windows on their desktop workstations.

Microsoft reports that a recent survey (which it sponsored) indicates that the oil and gas industry would prefer more localized technical computing power: deskside clusters over datacenter machines. This doesn't mean that oil & gas professionals are control freaks. I would guess that similar results would be obtained across all the HPC vertical markets: financial, bio/life sciences, entertainment, and so on.

Intel and AMD feed into this too. With their focus on creating volume processors, these companies end up designing hardware that targets personal computing environments. Intel's Terascale Program and associated RMS software (which we spotlight in this issue) seem destined for personal computing rather than the enterprise.

As you might expect, Sun Microsystems has a different take on this. They observe that the demand for computing is growing faster than processor performance. This, they say, will favor massively scaled out, but centralized, computing infrastructure, a la Google. At the Sun Analyst Summit in February, Greg Papadopoulos, Sun's CTO and EVP of R&D, laid out this vision in a presentation called “Redshift: The Explosion of Massive-Scale Systems”.

The idea Papadopoulos put forth was that since computational demands are growing faster than Moore's Law, infrastructure needs to be expanded, not just upgraded. He used HPC as a perfect example of an application category with insatiable computational demands, limited only by the money available to spend on it. The best way to address this type of demand, he argued, is through massively scaled computing systems that use virtualization and other technologies to maximize efficiencies in power usage, utilization, security and predictability. The centralized power grid is the analogy.

Papadopoulos believes we're approaching a phase change in IT, where the PC-based model will disappear. And according to him “when that cross-over point happens, it's going to be redefining for the industry.”

He could be right. But the best way to predict the future is to invent it. At some point, I would think that some company like Sun or Google, which has a big stake in this type of computing model, would develop and sell (or give away!) a general-purpose thin client.

Until then, I'll probably be in the market for a Vista machine (sigh).

—–

As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at editor@hpcwire.com.