GTC 2019: Chief Scientist Bill Dally Provides Glimpse into Nvidia Research Engine

By John Russell

March 22, 2019

Amid the frenzy of GTC this week – Nvidia’s annual conference showcasing all things GPU (and now AI) – William Dally, chief scientist and SVP of research, provided a brief but insightful portrait of Nvidia’s research organization. It’s perhaps not gigantic by large company standards, roughly 175 full-time researchers worldwide, but still sizable and quite impactful. At GTC, the exhibit hall was packed as usual with sparkly new products in various stages of readiness. It’s good to remember that many of these products were ushered into existence or somehow enabled by earlier work from Dally’s organization.

“We have had many successes, only a small number of them are listed here (in his presentation), and in my view what we really do is invent the future of Nvidia,” said Dally during his press briefing.

William Dally, Nvidia chief scientist and head of research

Nvidia must agree and politely declined to share Dally’s slides afterward. Perhaps a little corporate wariness is warranted. No matter – a few phone pics will do. In his 20-minute presentation, Dally hardly gushed secrets but did a nice job of laying out Nvidia’s research philosophy and broad organization, and he even discussed a few of its current priorities. It’s probably not a surprise that optical interconnect is one pressing challenge being tackled, and that work is in progress on “something that can go to 2 terabits per second per millimeter off the chip edge at 2 picojoules per bit.” More on that project later.

Presented here are most of Dally’s comments (lightly edited). They comprise an overview of Nvidia’s approach to thinking about and setting up the research function in a technology-driven company. Some of the material will be familiar; some may surprise you. Before jumping in, it’s worth noting that Dally is well-qualified for the job. He was recruited from Stanford University in 2009, where he was chairman of the computer science department. He is a member of the National Academy of Engineering, a Fellow of the American Academy of Arts & Sciences, a Fellow of the IEEE and the ACM, and received the 2010 Eckert-Mauchly Award. There’s a short bio at the end of the article.

  1. Philosophy – What is Research’s Role at Nvidia?

To give you an idea of what we do I’ll give you our philosophy. Our goal is to stay ahead of most of Nvidia and try to do things that can’t be done in the product groups but will make a difference for the product groups. When I was talked into leaving the academic world in 2008 by Jensen and starting Nvidia research in its current incarnation, I spent time surveying a lot of other industrial research labs and found that most of them did great research or had a huge impact on the company, but almost none of them did both. The ones that did great research tended to publish lots of papers but were completely disconnected from their company’s product groups. Others wound up sort of being consultants for the product groups, doing development work but not really doing much research. So I set as a goal for Nvidia research to walk this very narrow line between these two chasms, either of which will swallow you, to try to do great research and make a difference for the company.

Our philosophy on how to do research is to maximize what we learn for the minimum amount of effort put in. So in jumping into a project we basically look at the risk-reward ratio: what would be the payoff if we succeed, and how much effort is it going to take? We try to do experiments that are cheap and [require] very little effort but, if successful, will have a huge impact.

Another thing we do is involve the product groups at the beginning of research projects rather than at the end, and this has two really valuable consequences. One is that by involving them at the beginning they get a sense of ownership in it. When we get done we’re not just popping up with something that they have never seen before; it is something that they have been sort of a godparent to from day one, incubating the technology along. Probably more important, though, is that by getting them involved at the beginning we wind up solving real problems, not some artificial academic problem that we pose for ourselves. The technology winds up being much easier for them to adopt.

  2. Organization – Nvidia Tackles Supply and Demand

Roughly, we organize Nvidia research into two sides. There is a supply side that supplies technology to make GPUs better: we have a group that does circuits, a group that does design methodologies for VLSI, and groups for GPU architecture, networks, and programming systems. Programming systems really sort of spans both sides. [The other side is] the demand side of Nvidia research, the part of the research lab that drives demand for GPUs. All of our work in AI – in perception and learning, applied deep learning, and the algorithms groups in our Toronto and Tel Aviv AI labs – drives demand by creating new algorithms for deep learning.

We also have three different graphics groups that drive demand in graphics. One of our goals is to continue to raise the bar of what good graphics is, because if it ever stays stationary we would get eaten from below by commodity graphics. We recently opened a robotics lab. Our goal is to basically develop the techniques that will make robots of the future work closely with humans and be our partners, [with] Nvidia powering the brains for these robots. We have a lab that is looking into that.

The question mark down here (see slide) is our moonshot projects. Often we will basically pull people out of these different groups and kick off a moonshot. We had [one] a number of years ago to do a research GPU called Einstein; Einstein morphed into Volta, and that wound up being a great success. Then we had one a few years after that where we wanted to make real-time ray tracing a reality. We pulled people out of the graphics groups and architecture groups and kicked off a project that we basically called, internally, the TTU, for tree traversal unit, and it became the RT cores in Turing. So we have been able to have a number of very successful integration projects across these different groups.

We are geographically diverse, with many locations in the U.S. [and] Europe. We just opened a lab in Israel. I’d very much like to start a lab in Asia, and it really requires finding the right leader to start the lab. We tend to build labs around leaders rather than picking a geography and then trying to start something there.

  3. Engaging the Community – Publishing Ensures Quality; Open Sourcing Mobilizes Community Development

We publish pretty openly. Our goal is to engage the outside community, and publishing serves a number of functions [such as] quality control. One of the things I have observed is that research labs that don’t publish quickly wind up doing mediocre research, because the scrutiny of peer review, while harsh at times, really is a great quality control measure. If your work is not good enough to get into a top-tier conference like NeurIPS or ICLR in AI, ISCA in architecture, or SIGGRAPH in graphics, then it’s not good. And you have to be honest with yourself about that.

In addition to publishing, we file a number of patents, building intellectual property for the company. We release a lot of open source software, and this is in many ways a higher-impact thing than a publication because people immediately grab that open source software. Much of the GAN (generative adversarial network) work we’ve done with progressive GANs [is an example]: by open sourcing it, people immediately grab it, build on it, and start doing really interesting work. [It’s] a way of having that community feed itself and a way of making progress very rapidly. A small listing of some of the more recent papers we’ve published in leading venues is here (see slide).

  4. Successes – So How’s It Working?

We’ve had a lot of really big technology transfer successes to the company, and I have just listed a few of them here. Almost all of the work that Nvidia does in ray tracing started in Nvidia research. This includes our OptiX ray tracing product, which is sort of the core of our professional graphics. That started as a project when Steve Parker, who is now our general manager of professional graphics, was a research director reporting to me. After it became a successful research project we basically moved Steve and his whole team into our content organization and turned it into a product.

Then, as I said, we had a moonshot that developed what became the RT core in Turing, and actually a lot of the algorithmic things that underlie it. Our very fast algorithms – [the] BVH trees that are important in sampling, to decide what the right directions are to cast rays – all started out as projects in Nvidia research. DGX-2 is based on a switch we developed, the NVSwitch. That started as a project in Nvidia research. We had a vision of building what are essentially large virtual GPUs, a bunch of GPUs all sharing their memories with very fast bandwidth between them, and we were building a prototype of this in research based on FPGAs. The product group got wind of it and basically grabbed it out of our hands before we were ready to let it go. But that’s an example of successful development and transfer.
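
For readers unfamiliar with BVHs, the core idea the RT core accelerates in hardware looks roughly like this in software: a bounding volume hierarchy lets a ray skip most of the scene by only descending into boxes it actually hits. This is a simplified textbook sketch, not Turing’s implementation; the `Node` layout and slab test here are generic illustrations.

```python
# Minimal BVH ray-traversal sketch (textbook construction, for illustration).
from dataclasses import dataclass, field

@dataclass
class Node:
    box_min: tuple                                  # (x, y, z) lower corner of AABB
    box_max: tuple                                  # (x, y, z) upper corner of AABB
    triangles: list = field(default_factory=list)   # leaf payload
    children: list = field(default_factory=list)    # internal-node links

def ray_hits_box(origin, inv_dir, box_min, box_max):
    """Standard slab test; inv_dir holds 1/d per axis (assumes nonzero components)."""
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0, t1 = (lo - o) * inv, (hi - o) * inv
        t_near = max(t_near, min(t0, t1))
        t_far = min(t_far, max(t0, t1))
    return t_near <= t_far

def traverse(node, origin, inv_dir, hits):
    """Collect candidate triangles along the ray, pruning whole subtrees by box."""
    if not ray_hits_box(origin, inv_dir, node.box_min, node.box_max):
        return                                       # ray misses this subtree entirely
    if node.triangles:                               # leaf: test actual geometry here
        hits.extend(node.triangles)
    for child in node.children:
        traverse(child, origin, inv_dir, hits)
```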

cuDNN [was developed] back when Bryan Catanzaro (vice president, applied deep learning) was in our programming systems research group. I started a joint project with Andrew Ng (Baidu) and recruited Bryan to be the Nvidia representative. The software that came out of that really sort of launched Nvidia into deep learning. Then [there are] a bunch of applications in deep learning: image inpainting, noise-to-noise denoising, and progressive GAN, which really was the first GAN to crack the problem of producing good high-resolution images by not trying to learn everything at once but training the GAN progressively, starting with low-resolution 4×4 images and slowly working your way up to 1K or 4K images. Here’s a more complete list (see slide). I won’t go into it because I want to leave time for the interesting stuff later.
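
The progressive-training schedule Dally describes can be summarized in a few lines. This is an illustrative sketch of the resolution schedule only; the real progressive GAN code adds layer fade-ins, loss terms, and much more.

```python
# Sketch of the progressive-growing schedule: start tiny, double until target.
# Step budgets and what "train" does are hypothetical placeholders.

def progressive_resolutions(start=4, target=1024):
    """Yield training resolutions, doubling from start (4x4) to target (1K)."""
    res = start
    while res <= target:
        yield res
        res *= 2

for res in progressive_resolutions():
    # In the real method, generator/discriminator layers for this resolution
    # are faded in gradually before full training at this size.
    print(f"train generator/discriminator at {res}x{res}")
```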

  5. FutureScan – What’s on the Slab in the Lab

I picked three projects from different parts of Nvidia research to give you a flavor for the breadth of what we do. One project we just kicked off is a collaboration with Columbia University and SUNY Poly to build a photonic version of NVLink and NVSwitch. Very often we gauge what research we do by trying to find gaps: projecting our [current] technologies forward and looking at where we are going to come up short. One place we are going to come up short is in off-chip bandwidth, which is constrained both by bits per second per millimeter of chip edge – how many bits you can get on and off the chip given a certain perimeter – and by energy, picojoules per bit, getting that off the chip. Electrical signaling is pretty much on its last legs. We are going to be revving future versions of NVLink to 50 gigabits per second, then 100 gigabits per second, per pin, but then we’re kind of out of gas. What can we do beyond that?

What we do is go to optics, and our plan is to produce something that can go to 2 terabits per second per millimeter off the chip edge at 2 picojoules per bit, which is about an order of magnitude better than what we do today on both of those dimensions. The way we are able to do this is by producing a comb laser source – basically a bunch of different frequency tones – and we can then modulate each tone, so we are able to put about 400 gigabits per second on a single fiber by having that broken up into 32 different wavelengths we can individually turn on and off. We can then connect up a DGX-like box with switches that have fibers coming in and out of them and GPUs that have fibers coming in and out of them, and build very large DGX systems with very high bandwidth and very low power.
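
For a sense of scale, here is a quick back-of-envelope check of those numbers. The per-wavelength rate, power per millimeter, and fibers per millimeter below are derived from Dally’s quoted targets rather than stated by him.

```python
# Back-of-envelope arithmetic on the photonic NVLink targets Dally quotes:
# 2 Tb/s per mm of chip edge at 2 pJ/bit, 400 Gb/s per fiber over 32 wavelengths.

edge_bandwidth = 2e12    # bits/s per mm of chip edge (quoted target)
energy_per_bit = 2e-12   # joules per bit (quoted target)
fiber_rate = 400e9       # bits/s per fiber (quoted)
wavelengths = 32         # comb-laser tones per fiber, each modulated (quoted)

per_wavelength = fiber_rate / wavelengths        # rate carried by each tone
power_per_mm = edge_bandwidth * energy_per_bit   # watts per mm of edge
fibers_per_mm = edge_bandwidth / fiber_rate      # fibers needed per mm of edge

print(f"{per_wavelength / 1e9:.1f} Gb/s per wavelength")  # 12.5 Gb/s
print(f"{power_per_mm:.1f} W per mm of chip edge")        # 4.0 W
print(f"{fibers_per_mm:.0f} fibers per mm of edge")       # 5 fibers
```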

We are also experimenting with scalable ways of doing deep learning. This also demonstrates a lot of the work we do in building research prototypes. We recently taped out and evaluated what we call RC18 – research chip 2018 – which is a deep learning accelerator that can be scaled from a very small size, [from] a single one of these PEs (processing elements) up to 16 PEs on a small die. We have integrated 36 die on a single organic substrate, and the advantage here is that it demonstrated a lot of technologies: one is very efficient signaling from die to die on an organic substrate, [and] one is a scalable deep learning architecture. This has an energy per op – let’s see if I can get the numbers right – I believe of about 100 femtojoules per op doing deep learning inference, so it’s actually quite a bit better than most other deep learning inference engines that are currently available, and it is efficient at a batch size of one. We can have very low latency inference as well. The technology for signaling between the dies is something called ground-referenced signaling, and that’s probably about the best you can do electrically before we have to jump to the optical thing I showed you previously.
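
As a quick sanity check on that figure, roughly 100 femtojoules per operation converts directly into an efficiency number. This is generic unit arithmetic, not a measured Nvidia benchmark.

```python
# Convert the quoted ~100 fJ/op into ops-per-watt efficiency.
energy_per_op = 100e-15             # joules per inference operation (~100 fJ)
ops_per_watt = 1.0 / energy_per_op  # 1 W = 1 J/s, so this is ops/s per watt

print(f"{ops_per_watt / 1e12:.0f} TOPS/W")  # prints: 10 TOPS/W
```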

One thing I am very excited about is not so much a project as a new lab we started. We opened a lab in Seattle – a picture of the building is down there on the left, with the beautiful view out the window and the café right above it – to basically invent the future of robotics (see slide). Robots today are basically very accurate positioning machines. They tend to operate open loop. You know, they move a spot welder or spray gun to a preprogrammed position, so that the part you are operating on is where it is supposed to be. You’ll get very repeatable welds or painting or whatever it is you’ve programmed the robots to do, but that’s not the future. That’s the past.

The future of robotics is robots that can interact with their environment, and the thing that makes this future possible is deep learning. By building perception systems based on deep learning we can build robots that can react to things not being where they are supposed to be. They estimate their pose and plan their paths accordingly. We can have them work with humans while avoiding injuring the humans in the process. So our view is that the future of robotics is robots interacting with their environment, and we are going to invent a lot of that future.

Using the kitchen [as an example] – it’s hard to see most of it here, but this is a little Segway base with a robot arm on it, operating in a kitchen and working alongside a human to carry out tasks such as preparing a meal. If you think about it, that’s a hard thing for a robot. You are dealing with awkward shapes. You are dealing with people who move things around in odd ways. So it is really stressing the capabilities of what robots can do. A lot of people have done interesting demos trying to use reinforcement learning to train robots end-to-end. That doesn’t work in such a complex environment.

We are actually having to go back and look at a lot of classic robotics tasks like path finding, and we recently came up with a way of doing path finding using Riemannian Motion Policies (RMPs). It’s able to better deal with an unknown environment while maneuvering the robot arm to avoid striking things. To do pose estimation – say, of a box of noodles on the counter that you want to cook – we use neural networks to estimate the pose of that box. We do that by estimating the pose, rendering an image of what the box would look like in that pose, comparing it to the real image, and iterating to refine the pose estimate. It’s really very accurate, and by putting those pose estimates together we’ve been able to have these robots carry out very interesting tasks in our kitchen.
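
The render-and-compare loop Dally describes reduces to a simple iterative refinement. In this schematic sketch, `render`, `compare`, and `update_pose` are hypothetical stand-ins for the renderer, the image-difference network, and the pose-update rule used in the actual work.

```python
# Schematic render-and-compare pose refinement (stand-in functions, not
# Nvidia's implementation): estimate a pose, render the object in that pose,
# compare against the camera image, and iterate.

def refine_pose(initial_pose, camera_image, render, compare, update_pose,
                iterations=10):
    """Iteratively refine a 6-DoF pose estimate against a real camera image."""
    pose = initial_pose
    for _ in range(iterations):
        synthetic = render(pose)                  # what the object would look like
        error = compare(synthetic, camera_image)  # image-space discrepancy
        pose = update_pose(pose, error)           # nudge the pose to reduce error
    return pose
```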

  6. Wrap-up – Pick Good Projects; Don’t Waste Resources; Make a Productive Impact

To sort of summarize, Nvidia research is kind of like a big sandbox to play in. We get to play with neat technology that, if successful, will have a positive impact on the company. We span from circuits at the low end – better signaling, both on that organic substrate and optically (I consider the photonics also circuits) – all the way up to applications in graphics, perception and learning, and robotics. Our goal is to learn as much as possible with the least amount of effort and to optimize our impact on the company, and as part of doing that second thing we involve the product people from day one in a new research project, so that they both influence that project and gain ownership in it, and we have more impact on the company. We have had many successes, only a small number of them are listed here, and in my view what we really do is invent the future of Nvidia.

Dally Bio from Nvidia web site:

https://research.nvidia.com/person/william-dally

Bill Dally joined NVIDIA in January 2009 as chief scientist, after spending 12 years at Stanford University, where he was chairman of the computer science department. Dally and his Stanford team developed the system architecture, network architecture, signaling, routing and synchronization technology that is found in most large parallel computers today. Dally was previously at the Massachusetts Institute of Technology from 1986 to 1997, where he and his team built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanism from programming models and demonstrated very low overhead synchronization and communication mechanisms. From 1983 to 1986, he was at California Institute of Technology (CalTech), where he designed the MOSSIM Simulation Engine and the Torus Routing chip, which pioneered “wormhole” routing and virtual-channel flow control. He is a member of the National Academy of Engineering, a Fellow of the American Academy of Arts & Sciences, a Fellow of the IEEE and the ACM, and has received the ACM Eckert-Mauchly Award, the IEEE Seymour Cray Award, and the ACM Maurice Wilkes award. He has published over 250 papers, holds over 120 issued patents, and is an author of four textbooks. Dally received a bachelor’s degree in Electrical Engineering from Virginia Tech, a master’s in Electrical Engineering from Stanford University and a Ph.D. in Computer Science from CalTech. He was a cofounder of Velio Communications and Stream Processors.
