PETAFLOPS IN 2009: AN INTERVIEW WITH STEVE WALLACH

November 7, 2000

by Alan Beck, editor in chief, LIVEwire

Dallas, Texas — SC2000’s keynote address was given by Steven J. Wallach. Wallach co-founded Convex Computer Corporation in 1982 along with Robert J. Paluck, the company’s former chairman and CEO, and was the chief designer of the Convex C-Series, the world’s first affordable supercomputer, as well as the Exemplar Scalable Parallel Processor (SPP) at HP/Convex.

Wallach is currently an advisor to CenterPoint ( http://www.centerpointvp.com ) Venture Partners, Dallas, Texas and Vice President of Technology of Chiaro Networks ( http://www.chiaro.com ), Richardson, Texas. He may be best known outside HPCN circles as the Data General engineer who was the principal architect of the 32-bit Eclipse MV superminicomputer series as described by Pulitzer Prize winner Tracy Kidder in The Soul of A New Machine.

Wallach holds 33 patents in various areas of computer design and held a joint appointment in the Graduate School of Management and the Brown School of Engineering, Computer Science, at Rice University for the 1998 and 1999 academic years. He is a member of PITAC (the Presidential Advisory Board on High Performance Computing, Communications, and Networking) and of the advisory committee for the Hybrid Technology MultiThreaded Architecture (HTMT), a US DOD-funded project to develop the concepts for a petaflop computer. He is also a member of the National Academy of Engineering.

HPCwire interviewed Wallach to explore some of his current perspectives on the state of high performance computing:

HPCwire: Your SC2000 keynote is entitled “Petaflops in the Year 2009”. Is this realistic? What are the principal challenges HPC must meet to effect this goal?

WALLACH: This goal is more than realistic. One can make an argument that a petaflop computer system exists today: it is called the Web. It has been well documented how thousands of computers, distributed throughout the world, have been used to solve embarrassingly parallel applications. If we can apply 1,000,000 networked PCs/workstations, we get a petaflop computer. Entropia is an example of an effort that is attempting to do this.
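For illustration only (this is not Wallach’s or Entropia’s actual machinery), the following is a minimal sketch of the embarrassingly parallel model he describes: fully independent work units farmed out to many workers, with the aggregate-rate arithmetic spelled out. The per-node rate and node count are assumptions chosen to make the back-of-envelope math visible, roughly 1 GFLOPS sustained per machine times 1,000,000 machines, which gives 1 PFLOPS.

```python
# A minimal sketch (not any real system's code) of the embarrassingly
# parallel model: fully independent work units farmed out to many workers,
# simulated here with a local process pool.
from multiprocessing import Pool

# Assumed figures, used only for the back-of-envelope arithmetic:
GFLOPS_PER_NODE = 1.0e9      # ~1 GFLOPS sustained per PC/workstation (assumption)
NODES = 1_000_000            # hypothetical count of networked machines

def work_unit(seed: int) -> float:
    """One independent task; it never communicates with other tasks."""
    total = 0.0
    for i in range(1, 10_000):
        total += 1.0 / (seed + i)
    return total

if __name__ == "__main__":
    # 1e9 flop/s per node x 1e6 nodes = 1e15 flop/s = 1 petaflop/s
    print(f"Aggregate: {GFLOPS_PER_NODE * NODES / 1e15:.1f} PFLOPS")
    with Pool(processes=4) as pool:   # stand-in for a million remote nodes
        results = pool.map(work_unit, range(100))
    print(f"Collected {len(results)} results, sum = {sum(results):.3f}")
```

Real wide-area efforts of the kind Wallach mentions add scheduling, fault tolerance, and result verification on top of this pattern; the sketch only shows why fully independent tasks scale so easily across many machines.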

What my keynote address discusses is how to make a petaflop computer that is more general purpose (an oxymoron, perhaps?) and that sits in one location (that also has to be re-examined). Much of the technology that is used and developed for GRID computing today will be used for the petaflop computer that I will describe.

The principal challenges have not really changed much in the last 10 years. We will need advances in software, including compilers, operating systems, and development environments, and in the interconnect/memory system. Every time a new generation of processor is developed, with its own unique internal architecture, we stress the existing development and algorithmic environment.

We must also rethink the way we do storage. Petaflops of computing implies petabytes of storage. I believe that architectures developed for web-based and commercial storage systems will become the leading-edge architectures for technical computing.

HPCwire: After pioneering supercomputing technology, you are now closely involved with both CenterPoint Venture Partners and Chiaro Networks. What do you hope to accomplish through these corporate efforts?

WALLACH: Well, I guess I am still an engineer at heart. I like to make things happen and I like to ship product. The more disruptive the technology, the happier I am. Today, that generally means doing things in a startup, whether that means helping companies get started or getting directly involved in day-to-day operations. In fact, one can make an argument that major companies throughout the world rely on startups for their new technology. As near as I can tell, all major technology companies have a venture capital group. These internal venture groups look for companies whose technologies are strategic to the corporate mission. Intel Capital is perhaps the best example of this phenomenon.

I recently gave some testimony before a US Senate committee in support of upcoming NSF appropriations. One of the speakers, from the NSF, referred to one of their missions as being “the venture capitalist of the first degree,” meaning that the government “invests” in research without consideration of a financial return on investment, but rather a research return. I agree with this perspective.

When doing due diligence on companies seeking funding, it is fun to perform design reviews and/or make suggestions for improvement. Too many potential founders try to impress venture capitalists with spreadsheets and the like; in my book a spreadsheet is a random number generator. Also, with the CenterPoint and Sevin-Rosen funds, we have a keiretsu type of organization. In many cases, startups in the family help each other, when and where appropriate.

Personally, I am on the technical board of advisors of two startup companies, Chorum Technology (optical components) and Scale8 (petabyte storage systems), and I help out with some others.

HPCwire: As a member of the Presidential Advisory Board on High Performance Computing, Communications, and Networking, you are in a unique position to observe the impact of policy and politics on HPC. How would you characterize your experiences in this arena? Are there frustrations and/or satisfactions that you find particularly noteworthy?

WALLACH: There are both frustrations and satisfactions. The frustrations are the level of politics and what has to be “politically correct.” I will not go further, but Washington is Washington and politics is politics.

The satisfactions more than outweigh the frustrations. There is real satisfaction in helping our country by helping members of the various branches of government understand the importance of high performance computing. The one major recommendation of PITAC was that the US totally underspends on long-term basic research. Today, most of the funding goes to applied research. Long-term basic research funding is needed to help solve the problems and develop the technologies that will be needed 10 to 20 years from now. That is difficult to convey to someone who perhaps has only a four- to six-year view. But we must increase funding levels for long-term basic research.

There are two aspects of high performance computing that are very important. One is national security; the ASCI program is a prime example of this. The other is the trickle-down effect that high performance computing has on more commonplace applications. The extensive use of clusters and SMPs for various web-based services would not have been possible without the technology that was developed for high performance computing. Unfortunately, this is not well understood or appreciated.

HPCwire: When HPCwire interviewed you in 1997, you noted that knotty programming problems, often focused on algorithms and legacy code, were responsible for stymieing much progress in HPC. Has this changed? How? Have architectures like Tera’s MTA changed the picture significantly?

WALLACH: No, not really. Legacy codes still prevail in the technical fields. The newest codes are web-centric and are generally written in Java, but they are rarely numerically intensive. Every time a new processor or system architecture is developed, the code generator and machine-dependent optimizers have to be redone, and in many cases application tuning is needed. I am convinced that this is becoming, if not already, an art and not a science. At Convex, I used to say that benchmarking and tuning a system is really a benchmarking and tuning test of your analyst.

Tera’s MTA is a significant advance in computer architecture. But to fully utilize its capabilities you still need to tune your algorithms and your code.

HPCwire: With the new century, a new generation of computer scientists is taking the reins of HPC development. What advice would you like to give them?

WALLACH: Try to start out with a clean sheet of paper. Also recognize that the biggest market for high performance computers and scalable parallel processors is web-centric servers. Applications like databases, web hosting, and storage for petabytes of data and media files will dominate. Then incorporate numerically intensive features. The new generation also has to be more network- and grid-centric.

From a language perspective, we will continue to evolve FORTRAN, C, and Java. It appears that every 10 to 15 years or so, a new language is accepted, not by industry or government edict, but by a community groundswell. That is what happened with C and Java. So someone in the next 10 years or so will probably develop a new paradigm for software development that will be accepted. I have no idea what this will look like, but it will surely happen.
