VP & GM of Accelerated Computing Group
Ian Buck is NVIDIA’s General Manager for GPU Computing Software, responsible for all engineering, third-party enablement, and developer marketing activities for GPU computing at NVIDIA. Ian joined NVIDIA in 2004 and created CUDA, which remains the leading platform for GPU-accelerated parallel computing. Before joining NVIDIA, Ian was the development lead on Brook, the forerunner of general-purpose computing on GPUs. He holds a Ph.D. in Computer Science from Stanford University and a B.S.E. from Princeton University.
HPCwire: Hi Ian. Congratulations on being named an HPCwire Person to Watch in 2017!
NVIDIA established general-purpose GPU (GPGPU) computing and ushered in an era of accelerated computing, but what’s next for NVIDIA in HPC?
Ian Buck: What’s next for NVIDIA in HPC is accelerating intelligence and imagination. Through NVIDIA’s GPU computing and deep learning platform, we are enabling technologists around the world to fuse the technical disciplines of high performance computing with artificial intelligence to help society solve what was once unsolvable. Life-changing advances in the fields of healthcare, autonomous driving and more are being fueled by the intersection of HPC with AI.
HPCwire: How has the rise of machine learning impacted the GPU roadmap?
Buck: Machine learning has had a significant impact on our roadmap and that of our partners. From Kepler, to Maxwell, to Pascal, NVIDIA has been investing in improving our hardware and software to accelerate deep learning training and inference, improving performance by over 65 times. We are not just redesigning the GPUs; with the inventions of the NVLink processor interconnect and the DGX-1 AI system, we are rethinking the way GPUs communicate with each other and with the systems they go into.
HPCwire: As the inventor of CUDA, can you comment on CUDA’s success and its role in driving GPU adoption in professional computing?
Buck: CUDA was created over 10 years ago and is still going strong, evolving to help accelerate workloads that we believe GPUs are uniquely suited to accelerate. In the beginning, the mission was to transform computing by making the parallel performance of a GPU accessible to anyone who knew C, C++, or Fortran. What made it so successful was that users realized that once a program adopted CUDA, every generation of GPU improved its performance not by percentages but by x-factors. This continues today, where it is simply impractical to run some applications without a GPU.
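The accessibility Buck describes is easiest to see in code. Below is a minimal sketch (not drawn from the interview) of SAXPY, the classic introductory CUDA example: ordinary C with one `__global__` kernel, where the GPU runs one thread per array element. Unified memory via `cudaMallocManaged` is used here purely for brevity.

```cuda
// Minimal SAXPY (y = a*x + y) in CUDA C -- a sketch of the kind of
// kernel CUDA made writable by anyone who knew C.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    // Each thread computes one element of the result.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory, visible to CPU and GPU
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // each element is now 2*1 + 2 = 4
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The same source recompiled with `nvcc` runs unchanged on each new GPU generation, which is the x-factor effect the answer above refers to.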
HPCwire: As companies chase AI dollars, competition is ramping up from established players and from newer entrants alike. How will NVIDIA maintain its leadership?
Buck: We are focused on continuing to build the world’s most powerful AI platform, which includes not just the hardware but all the software and developer capabilities leveraged from years of investment in our CUDA platform. In addition, our architectures are evolving at a rapid pace, adding the latest AI instructions and capabilities directly into hardware. Because of this holistic approach, we are improving AI performance 10x with every generation.
HPCwire: Outside of the professional sphere, what can you tell us about yourself – personal life, family, background, hobbies, etc.? Is there anything about you your colleagues might be surprised to learn?
Buck: I’ve been a fan of computer graphics all my life and wrote my first GPGPU program on an SGI Octane in college. To this day I still dabble in reverse-engineering vintage video game hardware, though with four kids and a busy work life, maybe not as much as I’d like.