by Alan Beck, editor in chief, LIVEwire
Dallas, Texas — SC2000’s keynote address was given by Steven J. Wallach. Wallach co-founded Convex Computer Corporation in 1982 along with Robert J. Paluck, its former chairman and CEO, and was the chief designer of the Convex C-Series, the world’s first affordable supercomputer, as well as of the HP/Convex Exemplar Scalable Parallel Processor (SPP).
Wallach is currently an advisor to CenterPoint Venture Partners ( http://www.centerpointvp.com ) of Dallas, Texas, and Vice President of Technology at Chiaro Networks ( http://www.chiaro.com ) of Richardson, Texas. He may be best known outside HPCN circles as the Data General engineer who was the principal architect of the 32-bit Eclipse MV superminicomputer series, as described by Pulitzer Prize winner Tracy Kidder in The Soul of a New Machine.
Wallach holds 33 patents in various areas of computer design and held a joint appointment in the Graduate School of Management and the Brown School of Engineering (Computer Science) at Rice University for the 1998 and 1999 academic years. He is a member of PITAC (the President’s Information Technology Advisory Committee) and of the advisory committee for the Hybrid Technology MultiThreaded Architecture (HTMT), a US DoD-funded project to develop the concepts for a petaflop computer. He is also a member of the National Academy of Engineering.
HPCwire interviewed Wallach to explore some of his current perspectives on the state of high performance computing:
HPCwire: Your SC2000 keynote is entitled “Petaflops in the Year 2009”. Is this realistic? What are the principal challenges HPC must meet to achieve this goal?
WALLACH: This goal is more than realistic. One can make the argument that a petaflop computer system exists today: it is called the Web. It has been well documented how thousands of computers, distributed throughout the world, have been used to solve embarrassingly parallel applications. If we can harness 1,000,000 networked PCs and workstations, we get a petaflop computer. Entropia is one example of an effort attempting to do this.
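A back-of-envelope check of that claim, assuming each machine sustains roughly one gigaflop/s (the per-machine rate is an assumption here, not a figure Wallach gives):

$$10^{6}\ \text{machines} \times 10^{9}\ \text{flop/s per machine} = 10^{15}\ \text{flop/s} = 1\ \text{petaflop/s}$$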
What my keynote address discusses is how to make a petaflop computer that is more general purpose (an oxymoron, perhaps?) and that is located in one place (a notion that also has to be re-examined). Much of the technology being used and developed for grid computing today will be used for the petaflop computer that I will describe.
The principal challenges have not really changed much in the last 10 years. We will need advances in software, including compilers, operating systems, and development environments, and in the interconnect/memory system. Every time a new generation of processor is developed, with its own unique internal architecture, we stress the existing development and algorithmic environment.
We must also rethink the way we do storage. Petaflops of computing imply petabytes of storage. I believe that architectures developed for web-based and commercial storage systems will become the leading-edge architectures for technical computing.
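A similarly rough sketch connects the two figures, using the traditional rule of thumb of about one byte of memory or storage per flop/s of compute (the ratio is an assumption, not one Wallach states):

$$10^{15}\ \text{flop/s} \times 1\ \text{byte per flop/s} \approx 10^{15}\ \text{bytes} = 1\ \text{petabyte}$$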
HPCwire: After pioneering supercomputing technology, you are now closely involved with both CenterPoint Venture Partners and Chiaro Networks. What do you hope to accomplish through these corporate efforts?
WALLACH: Well, I guess I am still an engineer at heart. I like to make things happen and I like to ship product. The more disruptive the technology, the happier I am. Today, that generally means doing things in a startup, whether that means helping companies get started or getting directly involved in day-to-day operations. In fact, one can make an argument that major companies throughout the world rely on startups for their new technology. As near as I can tell, all major technology companies have a venture capital group. These internal venture groups look for companies with technologies that are strategic to the corporate mission. Intel Capital is perhaps the best example of this phenomenon.
I recently gave testimony before a US Senate committee in support of upcoming NSF appropriations. One of the speakers, from the NSF, referred to one of the agency’s missions as being “the venture capitalist of the first degree”, meaning that the government “invests” in research looking not for a financial return on investment but for a research return. I agree with this perspective.
When doing due diligence on companies seeking funding, it is fun to perform design reviews and/or make suggestions for improvement. Too many potential founders try to impress venture capitalists with spreadsheets and the like; in my book, a spreadsheet is a random number generator. Also, with the CenterPoint and Sevin Rosen funds, we have a keiretsu type of organization: in many cases, startups in the family help each other when and where appropriate.
Personally, I am on the technical advisory boards of two startup companies, Chorum Technology (optical components) and Scale8 (petabyte storage systems), and I help out with some others.
HPCwire: As a member of the President’s Information Technology Advisory Committee (PITAC), you are in a unique position to observe the impact of policy and politics on HPC. How would you characterize your experiences in this arena? Are there frustrations and/or satisfactions that you find particularly noteworthy?
WALLACH: There are both: frustrations and satisfactions. The frustrations are the level of politics and what has to be “politically correct”. I will not go further; Washington is Washington and politics is politics.
The satisfactions more than outweigh the frustrations. There is great satisfaction in helping our country by helping members of the various branches of government understand the importance of high performance computing. The one major recommendation of PITAC was that the US totally underspends on long-term basic research. Today, most funding goes to applied research. Long-term basic research funding is needed to help solve the problems and develop the technologies that will be needed 10 to 20 years from now. That is difficult to convey to someone who perhaps has only a four-to-six-year view. But we must increase funding levels for long-term basic research.
There are two aspects of high performance computing that are very important. One is national security; the ASCI program is a prime example. The other is the trickle-down effect that high performance computing has on more commonplace applications. The extensive use of clusters and SMPs for various web-based services would not have been possible without the technology developed for high performance computing. Unfortunately, this is neither well understood nor appreciated.
HPCwire: When HPCwire interviewed you in 1997, you noted that knotty programming problems, often focused on algorithms and legacy code, were responsible for stymieing much progress in HPC. Has this changed? How? Have architectures like Tera’s MTA changed the picture significantly?
WALLACH: No, not really. Legacy codes still prevail in the technical fields. The newest codes are web-centric and are generally written in Java, but they are rarely numerically intensive. Every time a new processor or system architecture is developed, the code generator and machine-dependent optimizers have to be redone, and in many cases application tuning is needed. I am convinced that this is becoming, if it is not already, an art and not a science. At Convex, I used to say that benchmarking and tuning a system is really a benchmarking and tuning test of your analyst.
Tera’s MTA is a significant advance in computer architecture. But to fully utilize its capabilities you still need to tune your algorithms and your code.
HPCwire: With the new century, a new generation of computer scientists is taking the reins of HPC development. What advice would you like to give them?
WALLACH: Try to start out with a clean sheet of paper. Also recognize that the biggest market for high performance computers and scalable parallel processors is web-centric servers. Applications like databases, web hosting, and storage for petabytes of data and media files will dominate. Then incorporate numerically intensive features. The new generation also has to be more network- and grid-centric.
From a language perspective, we will continue to evolve FORTRAN, C, and Java. It appears that every 10 to 15 years or so, a new language is accepted, not by industry or government edict but by a community groundswell. That is what happened with C and Java. So someone in the next 10 years or so will probably develop a new paradigm for software development that will be accepted. I have no idea what it will look like, but it will surely happen.