An Interview with ISC’09 Keynote Speaker Andy von Bechtolsheim

By Nicole Hemsoth

June 21, 2009

When 1,500 leading members of the world’s high performance computing community convene June 23-26 at the 2009 International Supercomputing Conference, the opening keynote address will be presented by Andreas “Andy” von Bechtolsheim, the legendary co-founder of Sun Microsystems and founder and Chief Development Officer of Arista Networks. Von Bechtolsheim will discuss “The Evolution of Interconnects for High Performance Computing.”

ISC, which will be held in Hamburg for the first time in the 24-year history of the conference, has a well-established reputation for presenting well-founded, precise and up-to-date information in an environment that encourages informal conversations and sharing of ideas. And of all the thought-provoking sessions scheduled for ISC’09, none are likely to spark more discussion than the keynote addresses.

In his presentation, von Bechtolsheim will discuss trends in the high performance computing market, including the challenge of building large fabrics and the role of InfiniBand and 10 Gigabit Ethernet. He will also look at how to address the challenges of building, integrating, and using petascale systems, including system power and cooling, system stability, and scalability. Finally, he will look at the impact of solid state memory on HPC deployments and how it can address data bandwidth within the system to deliver improved overall performance through a more balanced system architecture.

Von Bechtolsheim was a co-founder and Chief System Architect at Sun Microsystems, responsible for next-generation server, storage, and network architectures. From 1995 to 1996, he was CEO and President of Granite Systems, a Gigabit Ethernet switching startup he founded that Cisco acquired in September 1996. From 1996 to 2003, he was VP of Engineering and later General Manager of the Gigabit Systems Business Unit at Cisco Systems, which developed the Catalyst 4000/4500 Gigabit Switch family, the highest-volume modular switching platform in the industry.

Von Bechtolsheim earned an M.S. in Computer Engineering from Carnegie Mellon University in 1976. He was a doctoral student in Computer Science and Electrical Engineering at Stanford University from 1977 to 1982. He has been honored with a Fulbright scholarship, a German National Merit Foundation scholarship, the Stanford Entrepreneur Company of the Year award, and the Smithsonian Leadership Award for Innovation, and is a member of the National Academy of Engineering.

The following interview with von Bechtolsheim by Christoph Poeppe from “Spektrum der Wissenschaft” (the German sister publication of Scientific American) was translated by Jon Bashor and Heike Walther.

Spektrum der Wissenschaft: What drives a person, who was apparently meant to pursue a scientific career, to take a path that leads him to such exceptional commercial success? What went wrong?

Bechtolsheim: I don’t see any fundamental conflict between science and commercial success, at least not where I work — in Silicon Valley. All in all, though, I have always been much less interested in academic research and much more interested in how to build better products that drive a commercial success.

Spektrum der Wissenschaft: But didn’t you start out as a physicist?

Bechtolsheim: Not really. In 1974, I did win the German Science Fair in Physics with a device that could precisely measure flows using ultrasound, and in high school I took advanced classes in physics and biochemistry, because these were the most interesting classes that were offered. But I was always much more interested in computers and computer science, which is really an engineering discipline. There have been very few major breakthroughs in mathematics and theory in the last twenty-five years that affected the field of computer science. All the new advances that we have seen were really based on better engineering.

Innovation in the computer field is very different from innovation in a traditional industry such as chemistry. At the moment, “Green Energy” is a big focus for venture capitalists. But to make ethanol at a lower cost, you need an unbelievably large amount of investment capital to build a new facility, and this is difficult to come by these days.

In information technology, many of the most successful new companies were started with very modest capital. For example, Google, which has become the most successful search company, was financed with just 30 million dollars of venture capital.

And Google has been branching out to offer all kinds of new services and applications.

Spektrum der Wissenschaft: I’m really only familiar with Google as a search engine…

Bechtolsheim: Besides the Google search engine, there is also Google Maps and Google Apps and Google Talk and the YouTube video portal – the possibilities stretch out from there. The end user just needs a browser and an Internet connection to use all these services. The computer work is done inside Google’s gigantic data centers, where with clever engineering and large scale, Google has achieved enormous cost advantages compared to conventional data centers.

Spektrum der Wissenschaft: How so?

Bechtolsheim: Google has built a reliable system environment out of a large number of simple, low-cost servers. Google builds its datacenters in locations that have low-cost power and cooling, and it manages these data centers with very few people. It is estimated that the cost per CPU hour in a Google datacenter is between one-fifth and one-tenth of that in a traditional enterprise data center.

Spektrum der Wissenschaft: What’s your personal connection to Google?

Bechtolsheim: My friend David Cheriton, who is a professor at Stanford, introduced me to Sergey Brin and Larry Page. Their idea to sort search results by relevance, which is calculated from the number of links between websites, convinced me right away. It does not matter what the content of a website is; the only thing that counts is how many websites link to it, and how relevant those linking websites are. This approach is immune to tricks some sites use to artificially raise their hits, such as embedding the same word many, many times in a way that is invisible to the user.
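The link-based ranking idea described here can be illustrated with a toy power-iteration version of the simplified PageRank formulation; the example graph, damping factor, and function names below are illustrative, not Google's production algorithm:

```python
# Toy PageRank via power iteration on a tiny link graph.
# A page's score depends only on the scores of the pages linking to it,
# never on the page's own content.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            # Each page distributes its current score to its link targets.
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

# Page "c" is linked by two pages, so it outranks the others.
graph = {"a": ["c"], "b": ["c"], "c": ["a"]}
scores = pagerank(graph)
assert scores["c"] > scores["a"] > scores["b"]
```

Stuffing a page with hidden keywords changes nothing in this model, since only the link structure enters the computation.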

And the business model of linking relevant search results to relevant sponsored links was a stroke of genius that had not occurred to anyone else.

Spektrum der Wissenschaft: In your new company, Arista, you are focused mainly on building network switches. What pushed you in this direction?

Bechtolsheim: All large web companies are building large data centers for what is now called “cloud computing.” This concept used to be called grid computing, computing clusters or server farms. There is extensive data transfer among the servers in these cloud compute clusters. The end result of this computational work, such as a list of search results, doesn’t contain much data, but to calculate the relevance of a website, the page rank, you have to look through large amounts of data.

The demand for bandwidth rises in proportion to the speed of the servers and the number of servers in such a cloud. With 10,000 servers that require 1 gigabit per second per server, the cloud network has to move 10 terabits per second. Of critical importance is that the switches allocate bandwidth fairly to all servers and connect them with very low latency.
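The arithmetic behind that figure is easy to check:

```python
# Aggregate bandwidth the cloud network must carry, using the figures
# quoted: bandwidth grows in proportion to server count times
# per-server link speed.
servers = 10_000
gbps_per_server = 1              # 1 gigabit per second per server
total_gbps = servers * gbps_per_server
total_tbps = total_gbps / 1_000  # 1,000 gigabits per terabit
print(total_tbps)                # 10.0 terabits per second
```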

Spektrum der Wissenschaft: Do you build your own switch silicon for your systems?

Bechtolsheim: No. In contrast to 10 years ago, today there are such good switch chips and network processors available that there is no need to develop your own silicon, which is extremely expensive to do.

Spektrum der Wissenschaft: What do you bring to the table?

Bechtolsheim: We develop the network software. A switch needs to respond to a large number of protocols to operate correctly. We have developed a very modular and robust network operating system that we call EOS, which has separate processes for each task in the networking stack. If a process fails or gets updated, it does not affect the operation of the switch and the system continues without interruption. As a result our system is very stable. Further, EOS runs on top of a standard Linux kernel.

This means we can run any other program on the same switch, including customer specific solutions.
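The fault-isolation principle described here (one independent process per networking task, restarted on failure without disturbing its siblings) can be modeled in miniature. The `Agent` and `Supervisor` classes and the protocol names below are a hypothetical sketch of the design idea, not Arista's actual EOS code:

```python
# Minimal model of a modular network OS: each protocol task runs as an
# independent "agent"; the supervisor restarts a failed agent without
# touching the others, so the system continues without interruption.

class Agent:
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.restarts = 0

    def crash(self):
        self.alive = False

class Supervisor:
    def __init__(self, names):
        self.agents = {n: Agent(n) for n in names}

    def heal(self):
        # Restart only the failed agents; siblings are left untouched.
        for name, agent in self.agents.items():
            if not agent.alive:
                fresh = Agent(name)
                fresh.restarts = agent.restarts + 1
                self.agents[name] = fresh

sup = Supervisor(["stp", "ospf", "lldp"])
sup.agents["ospf"].crash()   # one protocol agent fails...
sup.heal()
# ...and only that agent is restarted; the others never stopped.
assert sup.agents["ospf"].restarts == 1
assert sup.agents["stp"].restarts == 0
```

In a real switch the agents would be OS processes supervised the same way, which is also what allows one of them to be upgraded in place.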

Spektrum der Wissenschaft: How many computers can one switch handle?

Bechtolsheim: Customers usually configure 20 to 40 computers per rack. Our rack-top switches have up to 48 ports, 40 of which connect to the computers in the rack and the rest connect to our core switch, which has hundreds of ports. This allows us to support very large clusters with 10,000s of servers.
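The port arithmetic in this answer can be sketched as a two-tier fabric sizing. The 384-port core switch and the one-uplink-per-core wiring below are assumptions chosen for illustration ("hundreds of ports"), not Arista's published configuration:

```python
# Two-tier fabric sizing with the figures quoted: 48-port rack-top
# switches, 40 server-facing ports, and the remaining ports as uplinks.
tor_ports = 48
servers_per_rack = 40
uplinks_per_rack = tor_ports - servers_per_rack   # 8 uplink ports

# Assume a 384-port core switch and one uplink from every rack to each
# of 8 core switches: each core port then serves exactly one rack.
core_ports = 384
core_switches = uplinks_per_rack                  # 8 core switches
racks = core_ports                                # 384 racks
servers = racks * servers_per_rack

print(servers)   # 15360 servers, on the order of the 10,000s quoted
```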

The computers are so fast nowadays that in many cases the network bandwidth has become the limiting factor. With our switches we offer customers a great way to increase overall system performance.
