FEATURES & COMMENTARY
San Diego, CALIF. — This continues a discussion about “Supercomputing Takes Yet Another Turn” (18939, 11.23.00). In a letter to the editor (18977, 12.01.00), Nigel Healy takes exception to the title and suggests “New Uses for the Internet” instead. He also lists some challenges in the Internet Computing field, suggesting that IC may not be viable for true supercomputing.
Although Internet Computing differs from conventional supercomputing in significant ways, and faces some challenges, it clearly supports many types of large-scale computing, including many of the high-performance computational science applications typically run on supercomputers.
As SETI@home ( setiathome.ssl.berkeley.edu ) has shown, very large-scale computation can be accomplished in this way. Recently, SETI@home’s statistics showed that over the 24-hour period ending 12/1/00, its collection of computers had achieved over 28 TeraFLOPs. Since the project duplicates computations (as a security measure), the real throughput is half of that, which is still an extremely impressive 14 TeraFLOPs. This is not just theoretical peak speed, but real, sustained computation.
The top-ranked computer on the Top500 list ( www.top500.org ) is the ASCI White SP Power3 ( www.llnl.gov/asci ), rated at 4.938 TeraFLOPs. The Linpack benchmark used for this list measures near-peak performance, and typical applications will run much slower (see http://www.top500.org/lists/linpack.html ). But even using the 4.938 figure, SETI@home is delivering nearly three times the sustained rate of the world’s fastest supercomputer. In my opinion, such a system should be considered a supercomputer. That it does so at a fraction of the cost makes it that much more attractive. For those applications that can run in the IC mode, the potential is very great indeed.
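A quick check of that arithmetic, as a short Python sketch (the figures come straight from the paragraphs above):

    # Sustained throughput reported by SETI@home over 24 hours, in TeraFLOPs.
    reported_tflops = 28.0
    # Work units are computed twice as a security measure, so the effective
    # scientific throughput is half the reported figure.
    effective_tflops = reported_tflops / 2        # 14 TeraFLOPs
    # Linpack rating of ASCI White, the top-ranked Top500 machine.
    asci_white_tflops = 4.938
    print(effective_tflops / asci_white_tflops)   # about 2.8x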
But I agree that the title “Supercomputing Takes Yet Another Turn” is somewhat misleading. Internet Computing is an excellent way to do computational science and other types of HPC, but is likely to coexist with conventional supercomputing for a long time, probably indefinitely. Many applications require very frequent, high-speed, low-latency communication between cooperating processors and will require conventional supercomputers for optimal performance. It is not so much that supercomputing is giving way to IC, but that IC is a new and growing type of HPC that supplements existing systems. It is another way to provide high-performance, high-quality, cost-effective computational capability.
The supplemental nature of IC is exemplified by the recent announcement of a major collaboration between Entropia and the PACI centers, NPACI and NCSA. These partners will develop and deploy a number of large-scale IC applications to supplement the conventional supercomputing done at the PACI centers (see “Entropia Creates Largest Computing Resource Ever Available to Academic Scientists,” http://www.entropia.com/press/release_11092000.asp ).
The likely benefit of IC to HPC and to society is tremendous. Already there are many projects in production today, including SETI@home, FightAIDSatHome ( http://www.fightaidsathome.org , a collaboration between Entropia and the Olson laboratory at the Scripps Research Institute), and Stanford’s Folding@home ( http://www.stanford.edu/group/pandegroup/Cosm/ ).
I have worked in the supercomputing field for over 20 years now and have been fascinated by the potential of IC applications since learning of SETI@home a year and a half ago. Last summer, I had the opportunity to participate in one such venture and joined Entropia. I believe that Entropia is well on its way to success, with the developing technology, talent, and vision to address the technical challenges and achieve its goals. We are building on three years of real-world experience (see GIMPS, the Great Internet Mersenne Prime Search, http://www.mersenne.org/prime.htm ).
I’ll comment briefly on the concerns expressed about load balancing, security, robustness, and economics.
Economics is rather interesting. The Entropia client makes use of computer cycles that would otherwise go to waste. This is commonly known as “cycle stealing,” but “recycling” would be a better term, since nothing is stolen. The Entropia client includes a screen saver, but it does not require the screen saver to be running in order to rescue cycles from the idle loop, and it gets out of the way immediately, within 1/100th of a second, when the machine is needed for other uses. Thus, the economics are that the computing power is essentially free; wasted computing and electrical power can be put to good use. And since it is generally a good idea to avoid power-cycling a computer, work PCs and workstations often sit totally idle overnight and on weekends. Home PCs also sit idle periodically.
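As a rough illustration of cycle recycling (a sketch of the general idea only, not Entropia’s actual client, whose internals I won’t describe here), a worker might poll the machine’s load and compute only while it is otherwise idle:

    import os
    import time

    def machine_is_idle(threshold=0.25):
        # Treat the host as idle when the 1-minute load average is low.
        # (os.getloadavg is Unix-only; a real client would use
        # platform-specific idle detection.)
        return os.getloadavg()[0] < threshold

    def recycle_cycles(work_step):
        # Compute in small slices, but only while the machine is idle;
        # otherwise sleep and stay out of the owner's way.
        while True:
            if machine_is_idle():
                work_step()
            else:
                time.sleep(1.0)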
Load balancing can be a concern for some applications, but not all. A unit of work is distributed to a client and returned when complete, and there is a steady, massive stream of work units to be done. Applications that do need load balancing can be accommodated by scheduling jobs on an appropriate subset of client hosts. This is an area of active development at Entropia, part of our effort to ease the integration of a wide variety of applications into our IC system.
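A minimal sketch of that work-unit model (the names here are illustrative, not Entropia’s API): a server hands out independent units from a queue and collects results as clients return them.

    from queue import Queue

    class WorkServer:
        def __init__(self, units):
            self.pending = Queue()
            for unit in units:
                self.pending.put(unit)
            self.results = {}

        def get_unit(self):
            # A client asks for the next available unit of work.
            return None if self.pending.empty() else self.pending.get()

        def submit(self, unit_id, result):
            # A client returns a completed unit.
            self.results[unit_id] = result

    # A client simply loops: fetch a unit, compute, return the result.
    server = WorkServer(units=[(i, i * 2) for i in range(100)])
    while (unit := server.get_unit()) is not None:
        unit_id, data = unit
        server.submit(unit_id, data ** 2)   # stand-in for real computation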
A significant portion of the Entropia infrastructure and development efforts address security through the use of encryption, authentication, and sandbox technologies. Of course, there is no such thing as perfect computer security, but an effective trade-off can be made. In many applications, data security is not a significant concern, or the individual pieces are unintelligible without the whole being known. Key results can be verified independently. In SETI@home, for example, when the IC system locates potentially interesting portions of the radio telescope data stream, they are reexamined independently.
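The duplicate-computation safeguard mentioned above for SETI@home can be sketched very simply: issue each work unit to two independent clients and accept the result only when the copies agree (a toy model, not the project’s actual protocol):

    def verify_by_duplication(unit, compute_a, compute_b):
        # Run the same work unit on two independent clients and
        # accept the result only if both copies agree.
        result_a = compute_a(unit)
        result_b = compute_b(unit)
        if result_a == result_b:
            return result_a
        # Disagreement: discard both results and reissue the unit.
        raise ValueError(f"unit {unit!r}: results disagree, reissuing")

    # Example: two honest clients agree, so the result is accepted.
    print(verify_by_duplication(7, lambda x: x * x, lambda x: x * x))   # 49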
Robustness of the software is a challenge, owing to the wide variation in PC configurations and devices. But the client software doesn’t need to handle every type of device itself, and the network of clients as a whole must be managed so that it can cope with client job failures and network disconnections from a variety of causes. Job retry and recovery is a major component of our infrastructure.
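That retry-and-recovery idea can be sketched as a simple reissue-on-failure policy (illustrative only; run_with_retry and its parameters are made up for this example): units whose clients crash, disconnect, or time out go back into the queue for another client.

    def run_with_retry(units, dispatch, max_tries=3):
        # dispatch(unit) sends a unit to some client and returns its
        # result; it raises on failure (crash, disconnect, timeout).
        results = {}
        tries = {unit: 0 for unit in units}
        pending = list(units)
        while pending:
            unit = pending.pop(0)
            tries[unit] += 1
            try:
                results[unit] = dispatch(unit)
            except Exception:
                if tries[unit] < max_tries:
                    pending.append(unit)   # reissue to another client
                # else: abandon the unit after max_tries attempts
        return results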
Another challenge, and another potential strength of the IC field, is that, so far, few people have thought about how their computational problems might be solved in this unorthodox manner. Instead of trying to make very efficient use of expensive machines, one can make less efficient use of large numbers of inexpensive machines. Some Monte Carlo approaches that are too inefficient on supercomputers may now be viable. Results that are produced too slowly on a single PC (or even a supercomputer) have a completely different impact when they become part of a huge ensemble. ClimatePrediction.com, for example, is planning to do millions of year-long runs on PCs: “…we aren’t proposing to parallelize a single model over many PCs, which would indeed be impractical, but to have everyone run their own independent climate model. This is feasible, because for a 50-year climate simulation (unlike a 3-day weather forecast), it doesn’t matter if the results aren’t in for a couple of years.” As more people think about large-scale problems in such innovative ways, additional and unexpected IC applications will be developed.
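The ensemble idea is easy to state in code: rather than parallelizing one model, run many independent simulations with different seeds and pool the results at the end. A generic sketch (using a trivial seeded random walk as a stand-in for a real climate model):

    import random
    import statistics

    def one_simulation(seed, steps=1000):
        # Stand-in for a long model run: a seeded random walk.
        rng = random.Random(seed)
        x = 0.0
        for _ in range(steps):
            x += rng.gauss(0, 1)
        return x

    # Each PC runs its own independent simulation; no communication
    # is needed until the results are pooled at the end.
    results = [one_simulation(seed) for seed in range(500)]
    print(statistics.mean(results), statistics.stdev(results))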
In short, the potential for Internet Computing is huge. There are technical challenges, but they are being addressed. Applications that can fit into the IC mode of computational science can tap into a resource that is orders of magnitude more capable than might otherwise be available.
In addition to all the other benefits, IC, by its nature, involves large numbers of the public directly in scientific research. The science is guided not only by peer review, but also by citizen choice, and an informed citizenry is a necessity. Such involvement and education of the public is sorely needed, particularly in the United States, where the average citizen understands little about science and the workings of the natural world.
I hope you all will wish us luck (and download and run our client software today at www.entropia.com!). The need for large-scale computational power is essentially unlimited, and IC can make significant contributions to HPC.
— Wayne Schroeder
The opinions expressed in this article are those of its author and not necessarily those of the publisher or staff of HPCwire.