This week, we are presenting our interview with 2019 Person to Watch Forrest Norrod as part of our HPCwire People to Watch focus series. Forrest is senior vice president and general manager of the Datacenter and Embedded Solutions Business Group at AMD, where he manages all aspects of strategy, business, engineering and sales for those products.
Forrest has more than 25 years of experience in the industry. In his previous role as vice president and general manager of Dell Data Center Solutions, he successfully led the creation of the company’s first internal startup. In addition to his Bachelor of Science and Master of Science in electrical engineering from Virginia Tech, Forrest holds 11 U.S. patents in computer architecture, graphics and system design.
HPCwire: Congratulations on your inclusion in our People to Watch list. How do you and other managers at AMD view 2018, taken as a whole?
Forrest Norrod: 2018 marked another year of strong growth across our high-performance CPU and GPU product lines. We are pleased with the progress we’ve made in the datacenter, and we had a number of marquee wins in 2018. We expect to see many more wins in 2019 as we continue our momentum and see adoption in all areas of the market.
HPCwire: AMD is making runs at CPU and GPU market share. How does a smaller company compete successfully against larger companies?
Norrod: AMD has a long history of disrupting the status quo through innovation. AMD played a major part in crafting many of the foundational architectural elements of the modern HPC server. AMD firsts include support for x86 64-bit code; multiple processor cores on a single chip; high-performance, scalable interconnects that allow the system to scale up or down as needed; and integrated memory controllers to better feed the cores. Virtualization technology is also fundamental today, and AMD drove the first virtualization hardware support, allowing a server to be sliced into many different virtualized services that are easily deployed at scale.
It’s about addressing real customer needs that our competitor might not want to address. For example, AMD introduced 64-bit x86 when the competitor was promoting a proprietary architecture. More recently, our no-compromise single-socket EPYC server processors let customers buy the right-sized system for their workload without compromising on performance, reliability or features. This lets customers avoid the constraints that force users of competitor-based systems into a two-socket server when a single-socket system would be the better choice.
As we came back into the market with EPYC we were unconstrained by any concern except addressing customer needs. EPYC is a result of AMD focusing on maximizing performance and density while reducing complexity to deliver greater choice, customization and cost savings.
HPCwire: What is the AMD perspective on HPC and AI and what can we expect from AMD in the year ahead on these fronts?
Norrod: HPC and AI fundamentally depend on compute performance, I/O bandwidth and memory throughput. AMD is bringing:
· Chiplet design to unlock the full power of 7nm (with our 7nm AMD EPYC processor – codenamed “Rome” due out mid-year).
· Much higher peak floating-point performance, more memory bandwidth, and broader support for heterogeneous systems based on AMD Radeon GPUs or accelerators from other companies.
· Advances in I/O – being the first to market with PCIe Gen 4 to enable better connections between CPUs and accelerators.
We are committed to this market for the long term, our product roadmap is on track, and we are engaged across the ecosystem to change the datacenter with EPYC and Radeon Instinct GPUs. The time is right for AMD and for our customers and partners.
HPCwire: Generally speaking, what trends and/or technologies in high-performance computing do you see as particularly relevant for the next five years?
Norrod: On the CPU side, it’s adding more capability in the CPU for memory- and compute-intensive applications – getting more memory bandwidth and more throughput performance. And in the highest-performance systems, it’s going to be “heterogeneous” systems where high-performance CPUs work closely with accelerators. In the next few years we believe you’ll see accelerators and CPUs become full peers, with memory coherency between the two. This presents both a great opportunity and a challenge for developers to unlock the full performance of that approach.
HPCwire: Outside of the professional sphere, what can you tell us about yourself – personal life, family, background, hobbies, etc.? Is there anything about you your colleagues might be surprised to learn?
Norrod: Outside of work, you can usually find me running around with my family. I have four kids (16-year-old triplets and a 13-year-old). We all have a real passion for travel, and my kids have been fortunate to have already visited all seven continents. I also love to keep my hand in engineering – I continue to do coding and design projects, and I mentor and coach student robotics teams.