An Interview with ISC’09 Keynote Speaker Andy von Bechtolsheim

By Nicole Hemsoth

June 21, 2009

When 1,500 leading members of the world’s high performance computing community convene June 23-26 at the 2009 International Supercomputing Conference, the opening keynote address will be presented by Andreas “Andy” von Bechtolsheim, the legendary co-founder of Sun Microsystems and founder and Chief Development Officer of Arista Networks. Von Bechtolsheim will discuss “The Evolution of Interconnects for High Performance Computing.”

ISC, which will be held in Hamburg for the first time in the 24-year history of the conference, has a well-established reputation for presenting well-founded, precise and up-to-date information in an environment that encourages informal conversations and sharing of ideas. And of all the thought-provoking sessions scheduled for ISC’09, none are likely to spark more discussion than the keynote addresses.

In his presentation, von Bechtolsheim will discuss trends in the high performance computing market, including the challenge of building large fabrics and the role of InfiniBand and 10 Gigabit Ethernet. He will also look at how to address the challenges of building, integrating, and using petascale systems, including system power and cooling, system stability, and scalability. Finally, he will look at the impact of solid state memory for HPC deployments and how it can address data bandwidth within the system to deliver improved overall performance through a more balanced system architecture.

Von Bechtolsheim was a co-founder and Chief System Architect at Sun Microsystems, responsible for next generation server, storage, and network architectures. From 1995 to 1996, he was CEO and President of Granite Systems, a Gigabit Ethernet switching startup he founded that Cisco acquired in September 1996. From 1996 to 2003, he was VP of Engineering and later General Manager of the Gigabit Systems Business Unit at Cisco Systems, which developed the Catalyst 4000/4500 Gigabit switch family, the highest volume modular switching platform in the industry.

Von Bechtolsheim earned an M.S. in Computer Engineering from Carnegie Mellon University in 1976. He was a doctoral student in Computer Science and Electrical Engineering at Stanford University from 1977 to 1982. He has been honored with a Fulbright scholarship, a German National Merit Foundation scholarship, the Stanford Entrepreneur Company of the Year award, and the Smithsonian Leadership Award for Innovation, and he is a member of the National Academy of Engineering.

The following interview with von Bechtolsheim by Christoph Poeppe from “Spektrum der Wissenschaft” (the German sister publication of Scientific American) was translated by Jon Bashor and Heike Walther.

Spektrum der Wissenschaft: What drives a person, who was apparently meant to pursue a scientific career, to take a path that leads him to such exceptional commercial success? What went wrong?

Bechtolsheim: I don’t see any fundamental conflict between science and commercial success, at least not where I work — in Silicon Valley. All in all, though, I have always been much less interested in academic research and much more interested in how to build better products that drive commercial success.

Spektrum der Wissenschaft: But didn’t you start out as a physicist?

Bechtolsheim: Not really. In 1974, I did win the German Science Fair in Physics by building a device that could precisely measure flows using ultrasound, and in high school I took advanced classes in physics and biochemistry, because these were the most interesting classes that were offered. But I was always much more interested in computers and computer science, which is really an engineering discipline. There have been very few major breakthroughs in mathematics and theory in the last twenty-five years that affected the field of computer science. All the new advances that we have seen were really based on better engineering.

Innovation in the computer field is very different from innovation in a traditional industry such as chemistry. At the moment, “Green Energy” is a big focus for venture capitalists. But to make ethanol at a lower cost, you need an unbelievably large amount of investment capital to build a new facility, and this is difficult to come by these days.

In information technology, many of the most successful new companies were started with very modest capital. For example, Google, which has become the most successful search company, was financed with just 30 million dollars of venture capital.

And Google has been branching out to offer all kinds of new services and applications.

Spektrum der Wissenschaft: I’m really only familiar with Google as a search engine…

Bechtolsheim: Besides the Google search engine, there is also Google Maps and Google Apps and Google Talk and the YouTube video portal – the possibilities stretch out from there. The end user just needs a browser and an Internet connection to use all these services. The computer work is done inside Google’s gigantic data centers, where with clever engineering and large scale, Google has achieved enormous cost advantages compared to conventional data centers.

Spektrum der Wissenschaft: How so?

Bechtolsheim: Google has built a reliable system environment out of a large number of simple, low-cost servers. Google builds its datacenters in locations that have low-cost power and cooling, and it manages these data centers with very few people. It is estimated that the cost per CPU hour in a Google datacenter is between one-fifth and one-tenth that of a traditional enterprise data center.

Spektrum der Wissenschaft: What’s your personal connection to Google?

Bechtolsheim: My friend David Cheriton, who is a professor at Stanford, introduced me to Sergey Brin and Larry Page. Their idea to sort search results by relevance, which is calculated from the links between websites, convinced me right away. It does not matter what the content of a website is; the only thing that counts is how many websites link to it and how relevant those linking sites are. This approach is immune to tricks some sites use to artificially inflate their rankings, such as embedding the same word many, many times in a way that is invisible to the user.

And the business model of linking relevant search results to relevant sponsored links was a stroke of genius that had not occurred to anyone else.
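For readers curious how link-based relevance works mechanically, here is a toy Python sketch of the PageRank idea described above. It is not Google’s implementation; the damping factor, iteration count, and example graph are illustrative assumptions.

```python
# Toy power-iteration PageRank over a small link graph.
# The damping factor (0.85) and iteration count are illustrative
# assumptions, not Google's production parameters.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to.
    Every linked-to page must also appear as a key."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue  # dangling page: its mass is dropped (a simplification)
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

# A page's rank depends only on who links to it, not on its own content.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(graph))
```

Note how keyword stuffing cannot help page “b” here: its rank is determined entirely by the pages linking to it.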

Spektrum der Wissenschaft: In your new company, Arista, you are focused mainly on building network switches. What pushed you in this direction?

Bechtolsheim: All large web companies are building large data centers for what is now called “cloud computing.” This concept used to be called grid computing, computing clusters or server farms. There is extensive data transfer among the servers in these cloud compute clusters. The end result of this computational work, such as a list of search results, doesn’t contain much data, but to calculate the relevance of a website, the page rank, you have to look through large amounts of data.

The demand for bandwidth rises in proportion to the speed of the servers and the number of servers in such a cloud. With 10,000 servers that require 1 gigabit per second per server, the cloud network has to move 10 terabits per second. Of critical importance is that the switches allocate bandwidth fairly to all servers and connect them with very low latency.
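The aggregate figure is straightforward arithmetic; the short check below uses the server count and per-server rate quoted in the answer above.

```python
# Back-of-the-envelope check of the cloud bandwidth figure quoted above.
servers = 10_000
gbps_per_server = 1                                # 1 Gigabit Ethernet per server
total_tbps = servers * gbps_per_server / 1_000     # 1,000 Gb/s per Tb/s
print(f"Aggregate bandwidth: {total_tbps} Tb/s")   # -> 10.0 Tb/s
```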

Spektrum der Wissenschaft: Do you build your own switch silicon for your systems?

Bechtolsheim: No. In contrast to 10 years ago, today there are such good switch chips and network processors available that there is no need to develop your own silicon, which is extremely expensive to do.

Spektrum der Wissenschaft: What do you bring to the table?

Bechtolsheim: We develop the network software. A switch needs to respond to a large number of protocols to operate correctly. We have developed a very modular and robust network operating system that we call EOS, which has separate processes for each task in the networking stack. If a process fails or gets updated, it does not affect the operation of the switch and the system continues without interruption. As a result our system is very stable. Further, EOS runs on top of a standard Linux kernel.

This means we can run any other program on the same switch, including customer specific solutions.
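The interview does not describe EOS internals beyond this, but the one-process-per-protocol design resembles a classic supervisor pattern. The Python sketch below illustrates that generic pattern only; the agent names and restart policy are illustrative assumptions, not Arista EOS code.

```python
# Generic sketch of a one-process-per-protocol supervisor pattern.
# NOT Arista EOS code: agent names and restart policy are assumptions.
import multiprocessing as mp
import time

def protocol_agent(name):
    """Stand-in for a single protocol daemon (e.g., a routing agent)."""
    print(f"{name} agent running")
    while True:
        time.sleep(1)  # a real agent would process protocol messages here

def supervise(agent_names):
    agents = {name: None for name in agent_names}
    while True:
        for name, proc in agents.items():
            if proc is None or not proc.is_alive():
                # One agent failed (or was stopped for an update):
                # restart it alone; all other agents keep running.
                proc = mp.Process(target=protocol_agent, args=(name,))
                proc.start()
                agents[name] = proc
        time.sleep(1)

if __name__ == "__main__":
    supervise(["stp", "ospf", "lldp"])  # hypothetical agent names
```

The point of the pattern is fault containment: killing any single agent process leaves the others, and the supervisor, untouched.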

Spektrum der Wissenschaft: How many computers can one switch handle?

Bechtolsheim: Customers usually configure 20 to 40 computers per rack. Our rack-top switches have up to 48 ports, 40 of which connect to the computers in the rack while the rest connect to our core switch, which has hundreds of ports. This allows us to support very large clusters with tens of thousands of servers.

The computers are so fast nowadays that in many cases the network bandwidth has become the limiting factor. With our switches we offer customers a great way to increase overall system performance.
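The scale implied by those port counts can be checked with simple arithmetic. In the sketch below, the per-rack numbers come from the answer; the 384-port core switch is an assumed size (the interview says only “hundreds of ports”), and reaching tens of thousands of servers would take multiple core switches.

```python
# Back-of-the-envelope fan-out for the two-tier design described above.
ports_per_tor = 48                 # rack-top switch ports (from the answer)
server_ports_per_tor = 40          # ports facing servers (from the answer)
uplinks_per_tor = ports_per_tor - server_ports_per_tor   # 8 uplinks per rack

core_ports = 384                   # assumed core switch size ("hundreds of ports")
racks_per_core = core_ports // uplinks_per_tor
servers_per_core = racks_per_core * server_ports_per_tor
print(racks_per_core, "racks,", servers_per_core, "servers per core switch")

# Rack oversubscription: 40 Gb/s of server traffic shares 8 Gb/s of uplink.
print("oversubscription:", server_ports_per_tor / uplinks_per_tor, ": 1")
```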
