Senior Vice President & CTO
Steve Scott serves as Senior Vice President and CTO at Cray, responsible for guiding the long-term technical direction of the company’s supercomputing, storage and analytics products. Dr. Scott rejoined Cray in 2014 after serving as principal engineer in the Platforms group at Google and before that as SVP and CTO for NVIDIA’s Tesla business unit.
HPCwire: Steve, this is your third People to Watch win – I hope you don’t feel too watched. But seriously, congrats! You’ve been back now for four years at the company where you launched your career. What’s special about working at Cray?
Steve Scott: Cray is the only company in the world focused solely on the high-end HPC market. We’re filled with employees who are passionate about HPC, and we’ve got an incredible forty-five-year history advancing the field of supercomputing. I really enjoy working with the team here, and I like that Cray is large enough to do really interesting work, but small enough that individuals can have a significant impact, and people can work on a wide variety of technologies and projects.
HPCwire: More generally, what excites you about working in high-performance computing?
Steve Scott: I love seeing what people do with our systems. Computers are just tools, but it’s particularly gratifying to create tools that allow people to ask and answer fundamental questions about our universe, to find cures for diseases, to design better materials and products, to better forecast extreme weather events, to enhance national security, to develop clean energy sources, to reduce famine, and so much more. It makes you feel genuinely good about what you do each day. Oh, and I like the technology, too.
HPCwire: 2018 saw major announcements and development across Cray’s entire portfolio: the ClusterStor deal, a unified architecture in Shasta with support for an increasingly diverse set of processing elements, and a new interconnect with Slingshot. What’s the overall technical strategy and focus for Cray this coming year?
Steve Scott: 2019 is all about delivering on our next-generation Shasta platform. Shasta unifies everything we’ve been doing over the past several years across multiple product lines. What used to be separate systems for analytics, AI, commodity computing, and high-end simulation and modeling are now converged in Shasta. We’re even pulling what used to be an externally attached storage system directly into the system, complete with sophisticated tiering between high-performance flash and high-capacity disk partitions. Everything is tied together by our new Slingshot interconnect, which provides surprisingly good performance at scale, as well as performance isolation across increasingly diverse workloads.
Shasta will let us deliver efficient exascale systems at the high end, but also provide excellent support for the growing AI/analytics market, and further expand our commercial business.
HPCwire: Generally speaking, what trends and/or technologies in high-performance computing do you see as particularly relevant for the next five years? Also, what’s your take on near-term prospects for quantum computing and neuromorphic technologies?
Steve Scott: I see two trends having a major impact on high performance computing. The first, on the technology side, is the plateauing of CMOS performance, which is leading to significant growth in architectural diversity as people use specialization as a means to enhance performance. GPU computing, increased use of FPGAs, and the crop of emerging AI accelerators are three examples of this trend, but we’re also seeing growing diversity in memory systems suitable for different workloads. Embracing this diversity is a core tenet of our Shasta design.
The second, on the workloads side, is that machine learning/AI is gaining traction across a wide set of markets and problem domains, with the promise of replacing or significantly enhancing traditional simulation and modeling. Systems of the future will not have the luxury of focusing on one workload or the other; they will need to be designed for converged, hybrid workflows.
I get asked about quantum computing a lot. I believe it holds significant promise for certain classes of problems, but it has major engineering challenges and is still likely more than 10 years away from being truly practical. When it arrives, it will have a huge impact in some domains, but will only address a portion of the overall HPC landscape, and will need to be augmented by fast classical computers to run other parts of the workload and support the quantum calculations.
Neuromorphic computing is really interesting, and there’s likely some benefit from more closely modeling biological neurons, and from using the timing information inherent in spiking models. But it’s going to be very difficult to catch up to classical deep neural networks, which have proven to be extremely useful in practice, and which are pervasively deployed today. I suspect that the continued investment and research in DNNs is going to keep them ahead of neuromorphic approaches for the foreseeable future.
HPCwire: Outside the professional sphere, what activities, hobbies or travel destinations do you enjoy in your free time?
Steve Scott: Although I’m on the road more than I’d like for work, my wife and I love traveling, both to new cities around the world and to familiar places like London, New York, or some of our great National Parks. In the summers, we spend as much time as we can at our cabin on a lake in northern Wisconsin, cooking, swimming, playing games, and watching sunsets. I’ve been an avid skier most of my life, and I really enjoy volleyball, though I haven’t found the time to play for a while.