Senior Vice President and General Manager of the Datacenter and Embedded Solutions Business Group
Forrest Norrod is senior vice president and general manager of the Datacenter and Embedded Solutions Business Group at AMD. In this role, he is responsible for managing all aspects of strategy, business management, engineering and sales for AMD datacenter and embedded products. Norrod has more than 25 years of technology industry experience across a number of engineering and business management roles at both the chip and system level.
Norrod holds Bachelor of Science and Master of Science degrees in electrical engineering from Virginia Tech and holds 11 US patents in computer architecture, graphics and system design.
HPCwire: Congratulations on your inclusion in our People to Watch list. How do you and other managers at AMD view 2018, taken as a whole?
Forrest Norrod: 2018 marked another year of strong growth across our high-performance CPU and GPU product lines. We are pleased with the progress we’ve made in the datacenter, and we had a number of marquee wins in 2018. We expect to see many more wins in 2019 as we continue our momentum and see adoption in all areas of the market.
HPCwire: AMD is making runs at CPU and GPU market share. How does a smaller company compete successfully against larger companies?
Norrod: AMD has a long history of disrupting the status quo through innovation. AMD played a major part in crafting many of the foundational architectural elements of the modern HPC server. AMD firsts include support for x86 64-bit code; multiple processor cores on a single chip; high-performance, scalable interconnects that allow the system to scale up or down as needed; and integrated memory controllers to better feed the cores. Virtualization technology is also fundamental today, and AMD drove the first virtualization hardware support, allowing the server to be sliced into many different virtualized services that are easily deployed at scale.
It’s about addressing real customer needs that our competitor might not want to address. For example, AMD introduced 64-bit x86 when the competitor was promoting a proprietary architecture. More recently, our no-compromise single-socket EPYC server processors allow customers to buy the right size and the right system for their workload without compromising on performance, reliability or features. This lets customers avoid the constraints that force users of competitor-based systems into a two-socket server when a single-socket system would be the better choice.

As we came back into the market with EPYC, we were unconstrained by any concern except addressing customer needs. EPYC is the result of AMD focusing on maximizing performance and density while reducing complexity, to deliver greater choice, customization and cost savings.
HPCwire: What is the AMD perspective on HPC and AI, and what can we expect from AMD in the year ahead on these fronts?
Norrod: HPC/AI fundamentally depends on compute performance, I/O bandwidth and memory throughput. AMD is bringing:
· Chiplet design to unlock the full power of 7nm (with our 7nm AMD EPYC processor, codenamed “Rome,” due out mid-year).
· Much higher peak floating-point performance, more memory bandwidth, and broader support for heterogeneous systems based on AMD Radeon GPUs or accelerators from other companies.
· Advances in I/O – being first to market with PCIe Gen 4 to enable better connections between CPUs and accelerators.
We are committed to this market for the long term, our product roadmap is on track, and we are engaged across the ecosystem to change the datacenter with EPYC and Radeon Instinct GPUs. The time is right for AMD and for our customers and partners.
HPCwire: Generally speaking, what trends and/or technologies in high-performance computing do you see as particularly relevant for the next five years?
Norrod: On the CPU side, it’s adding more capability in the CPU for memory- and compute-intensive applications – getting more memory bandwidth and more throughput performance. And on the highest-performance systems, it’s going to be heterogeneous systems where high-performance CPUs work closely with accelerators. In the next few years, we believe you’ll see accelerators and CPUs becoming full peers, with memory coherency between the two. This presents both a great opportunity and a challenge for developers to unlock the full performance of that approach.
HPCwire: Outside of the professional sphere, what can you tell us about yourself – personal life, family, background, hobbies, etc.? Is there anything about you your colleagues might be surprised to learn?
Norrod: Outside of work, you can usually find me running around with my family. I have four kids (16-year-old triplets and a 13-year-old). We all have a real passion for travel, and my kids have been fortunate to have already visited all seven continents. I also love to keep my hand in engineering, so I continue to do coding and design projects and mentor and coach student robotics teams.
Before joining AMD, Norrod was vice president and general manager of the Server Business at Dell from December 2009 to October 2014, driving the business to market-share leadership in several key geographies and markets while delivering consistent revenue and profitability growth. Earlier, as vice president and general manager of Dell Data Center Solutions, Norrod led the creation of the company’s first internal startup, which established Dell’s leadership presence in the hyperscale datacenter market. He joined Dell as CTO of Client Products in August 2000, then led the company’s Enterprise Engineering organization before ultimately taking responsibility for all of Dell’s global engineering teams.
Prior to Dell, Norrod worked at Cyrix Corp. from 1993 to 1997 and at National Semiconductor from 1997 to 2000, leading the integrated x86 CPU businesses.