People to Watch 2019

Thierry Pellegrino
VP Business Strategy & GM HPC
Dell EMC

Thierry Pellegrino is vice president of business strategy for Server and Infrastructure Systems at Dell EMC and general manager for Dell EMC’s high performance computing (HPC) business. A 20-year Dell veteran, Thierry is an experienced leader with a background in engineering, go-to-market, technology and strategy. During his tenure at Dell, Thierry led the first Dell converged infrastructure product (the M1000e), assembled the Global OEM Custom Engineering organization, and led technology strategy for the Dell and EMC combination.

In his current role, he leads all aspects of the global HPC business for Dell EMC (engineering across the full portfolio, product management, go-to-market and enablement). Thierry also shapes the business strategy direction for multiple value chain organizations tied to the Dell EMC server business for the next 5-10 years, working closely with product teams and the CTO organization.

Thierry has a strong global background. Born and raised in France, he speaks five languages. He has spent the last 20 years in Austin, Texas, where he lives with his family.

HPCwire: Hi Thierry, congratulations on your selection as a 2019 HPCwire Person to Watch. Having just finished a full year as the leader of Dell EMC’s HPC strategy, perhaps you could summarize the major changes initiated and milestones hit last year and briefly outline your second year HPC agenda?

Thierry Pellegrino: We measure our milestones by our customers’ achievements. Recently, the University of Michigan’s Great Lakes system was the world’s first Mellanox HDR InfiniBand deployment. Ohio Supercomputer Center’s Pitzer was our first large-scale Direct Contact Liquid Cooling implementation, in collaboration with CoolIT. The University of Cambridge’s Cumulus topped the IO-500 list, and TACC won the NSF award to bring online, in a few months, the most powerful academic system in the world.

We are also seeing more companies use HPC solutions for AI-enabled innovation and productivity. For example, at the Dell Technologies analyst summit, Mastercard shared how it protects customers from fraud using AI. Ziff.ai shared its AI-powered innovations in image analysis as an example of how AI has rapidly created a new startup ecosystem. Meanwhile, Zenuity is using AI to improve the sensor data analysis and mapping that will enable Volvo to deploy incredibly safe autonomous vehicles.

We aim to keep that momentum going in 2019 as more customers turn to Dell EMC for advice and help in building their HPC and AI environments in academic research and commercial applications.

Many see that analytics and AI are essentially big-data problems that require powerful compute, networking and storage. HPC technologies are therefore now being used to enable high performance data analytics and to train machine learning models, letting researchers and companies alike gain new insights and understanding from vast digital data while complementing traditional HPC simulation approaches. At the same time, HPC workloads are becoming more data-centric, adding AI technologies that extend the capabilities of traditional HPC modeling and simulation.

We are now beginning to see the confluence of simulation, data analytics and AI in research and in industry, enabled by converged HPC solutions with well-balanced, high-performance storage and IO capabilities.

And speaking of storage: as the No. 1 storage company in the world, this year we’re going to highlight our comprehensive storage portfolio, from fast scratch all the way to cloud storage, something other HPC vendors just don’t have. While we’re determined to keep our core strong, we’re also focused on helping our partners be more profitable.

HPCwire: AI writ large is dramatically reshaping how we think of and use HPC. In this context how must the HPC user (academic and enterprise) community’s practice, attitudes, and skill sets change? What technologies will lead AI infusion throughout HPC and where are needed skills (and technologies) most lacking?

Pellegrino: One of the things I love about HPC is that it is leading edge—it’s where innovation starts. Our customers have been working with AI for quite some time; yet many are struggling with the definition of HPC. For example, some might say that financial services, Hadoop or OpenStack workloads are not HPC. At the same time, there is increasing diversity in the applications, workloads and approaches that leverage HPC technologies and strategies. This becomes even more true with the growth of multi-cloud computing for increased scale, and of vast edge computing resources that conduct inferencing and streaming analytics at scale.

The big data explosion, coupled with technology advances (modeling and software, but also infrastructure), has made it possible to train AI models for a number of prediction and automation use cases, and data sets continue to grow exponentially. Of course, training with these enormous, complex data sets is computationally intensive, and that’s where HPC comes in.
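As a rough illustration of what that computational pattern looks like, the sketch below shows data-parallel training with PyTorch’s DistributedDataParallel, where each process in a cluster trains on its own shard of the data and gradients are averaged across all processes during the backward pass. The model, synthetic dataset and hyperparameters are placeholders for illustration only, not a configuration tied to any particular system.

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main():
    # One process per GPU; a launcher such as srun or torchrun is assumed
    # to set RANK, WORLD_SIZE and LOCAL_RANK in the environment.
    dist.init_process_group(backend="nccl")  # "gloo" for CPU-only clusters
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)

    # Placeholder model and synthetic dataset standing in for a real workload.
    model = DDP(torch.nn.Linear(1024, 10).cuda(local_rank),
                device_ids=[local_rank])
    data = TensorDataset(torch.randn(10000, 1024),
                         torch.randint(0, 10, (10000,)))

    # DistributedSampler shards the dataset so each rank sees a distinct slice.
    sampler = DistributedSampler(data)
    loader = DataLoader(data, batch_size=64, sampler=sampler)

    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()
    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()  # gradients are all-reduced across ranks here
            opt.step()
    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Adding nodes increases the number of shards processed in parallel, which is exactly where balanced HPC compute, networking and storage pay off.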

HPC was formerly the domain of specialists using expensive, proprietary supercomputers, but recent advances in compute, networking and storage technologies have made HPC—and thus data analytics and AI—available on small clusters. This changes the game for more traditional HPC in academic and government institutions and life sciences firms, but it also puts AI within reach for a wider range of use cases. For example, enterprises that have been collecting data for years can now analyze that historical data using AI algorithms to gain better market insights, increase efficiency, and realize higher ROI on data-driven investments. This turns CapEx and OpEx burdens into new revenue opportunities.

How must the HPC user (academic and enterprise) community’s practice, attitudes, and skill sets change?

Data science is hotter than ever! HPC experts have gone from geeks to critical members of the IT community. Providers need to be ready for the wave of people who want to take advantage of technology to gain new insights, create new lines of business, and automate for speed and efficiency. This means we need to offer HPC solutions that are optimal for analytics and AI as well as simulation, and offer training and support for customers to collect and curate data, develop and train AI models, and deploy trained models. We’re doing this while measuring their effectiveness and retraining as necessary to maximize results and ROI.

These offerings bring together high levels of compute, I/O and storage, combined with data analytics and machine learning frameworks, delivered in a flexible yet secure way.

With data multiplying every second, HPC-enabled machine learning training will go from experimentation to production models deployed for inferencing, honing in on and automating the items with the greatest return on investment. Applications and infrastructure must quickly and easily scale as data scales, and jobs are going to change as data grows and AI algorithms and tools evolve rapidly. (There’s also a whole line of new jobs around categorizing and tagging data.)
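As a small sketch of that experimentation-to-production hand-off (assuming PyTorch; the model below is a stand-in for a network already trained at scale), a trained model can be frozen with TorchScript so an inferencing service can load it without any of the training code:

import torch

# Stand-in for a model that has already been trained on the cluster.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 128), torch.nn.ReLU(), torch.nn.Linear(128, 10))
model.eval()

# Record the forward pass with an example input and serialize the result.
example = torch.randn(1, 1024)
scripted = torch.jit.trace(model, example)
scripted.save("model_inference.pt")

# A production inferencing service loads and queries the frozen artifact.
deployed = torch.jit.load("model_inference.pt")
with torch.no_grad():
    prediction = deployed(torch.randn(1, 1024)).argmax(dim=1)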

We need to continue our commitment to making HPC more accessible, enable the HPC community to continue to grow and share advances, and continue to push the boundaries of new and disruptive technologies, as we all help shape the future of AI.

What technologies will lead AI infusion throughout HPC and where are needed skills (and technologies) most lacking?

Technologies leading AI infusion include:

  • Software: Embedding AI into software platforms, e.g., extensions powered by AI or software with native machine learning capabilities
  • Multi-cloud: Native cloud-driven self-service business intelligence platforms infused with AI capabilities that can provide governed data discovery
  • Virtual and augmented reality: Immersive technologies that are opening up new use cases to transform experience, improving simulation and fast-tracking product development
  • 3D/4D printing: With robotics, big data and automation, this is set to impact design and engineering workflows, opening up new opportunities for product creation
  • IoT and the new era of connected data at volume: Via sensors, this provides an opportunity to move beyond algorithmic simulation to create new connected design and engineering workflows, where the data that informs iterations can reach from designers to the shop floor and back again in real time

Coupled with 5G, IoT is going to revolutionize our ability to create connected autonomous vehicles that talk to each other and to roads and signs, enabling smart mobility. IoT plus AI will enable smart buildings, schools, factories, hospitals, venues and more to become the norm, and even communities and full cities will be imbued with analysis and automation that improves living, working, learning and playing. The confluence of analytics and AI, IoT and 5G, and simulation will create entirely new possibilities for our environments and will require a comprehensive set of skills to make it all work together with great reliability and security.

I don’t know anyone who has all of these skills, so we need businesses to collaborate more closely with universities to produce graduates who do.

HPCwire: With each new posting of the Top500, debate swells over its value and then subsides until the next posting. We seem to like ‘lists.’ What’s the Dell EMC perspective on the value of the Top500, and what, if any, are its aspirations to have a presence on the list, and why?

Pellegrino: While some have misused the list, the Top500 competition is a driver for everyone in the industry to push the boundaries of technology, to try new things and to create new ways to achieve greater performance and scale. We are proud to have had many Top500 systems over the years, including multiple top-10 systems at TACC—with another, Frontera, coming in the next few months. One of our Dell EMC HPC & AI Innovation Lab systems, Zenith, is continuously upgraded and remains on the Top500 as well. Such leadership-class systems demonstrate that our solutions scale optimally and that we have the expertise to help customers scale their workloads. They are valuable tools for helping us create the best solutions for all customers.

And that’s what matters most to us. We want to be the company that helps more people achieve more innovations and discoveries than any other HPC vendor. The use of HPC in research and academia is pervasive, and enterprises are now using HPC more than ever, having a greater impact on the HPC market and industry. As much as we value systems like Frontera, Zenith and other Top500 entries, what drives us is making HPC available to customers in every field, for every workload that needs performance to accelerate innovation and achieve new understanding through simulation, analytics or AI.

HPCwire: Generally speaking, what trends and/or technologies in high-performance computing do you see as particularly relevant for the next five years? Also, what’s your take on near-term prospects for quantum computing and neuromorphic technologies?

Pellegrino: There are several trends worth noting:

  • Never-ending pushes to increase HPC performance and scale, since direct numerical simulation at atomic, molecular or cellular levels of full real-world problems still greatly exceeds the scale of even the largest systems
  • Continued rapid expansion of HPC for data analytics and AI, where the data gold mine will spark the next gold rush in tech investments
  • 5G will have us living on the edge, connecting more devices, cars and systems—and requiring HPC technologies to process vast streaming data and to train AI models that enable billions of devices to conduct real-time, high-fidelity AI inferencing
  • Multi-cloud environments will kick automation and AI/ML processing into high gear while offering customers greater flexibility and cost control
  • The Gen-Z fabric will enable disaggregation of components, enabling even more scalable systems with superior balance and utilization for different workloads, thus further enhancing customers’ capabilities and ROI
  • Organizations will accelerate ways to design waste out of their business models through new innovation in recycling and closed-loop practices

While the race is on for quantum computing, it’s likely around a decade away. Neuromorphic technologies are much closer—as companies have already demonstrated neuromorphic chips modeled on biological brains—promising to accelerate AI.

HPCwire: Outside of the professional sphere, what can you tell us about yourself – personal life, family, background, hobbies, etc.? Is there anything about you your colleagues might be surprised to learn?

Pellegrino: At Dell Technologies, I’ve worked in engineering, the CTO organization and strategy. Born and raised in France (Paris, Strasbourg, Lyon, Toulouse), I speak five languages. I’ve been in the US for the past 23 years but still travel extensively worldwide. I love my two children, Chiara and Vitali. I am passionate about travel, international real estate, cars, food and wine.

The 2019 HPCwire People to Watch:

  • Lori Diachin, ECP
  • Talia Gershon, IBM
  • Gopal Hegde, Cavium/Marvell
  • Steve Oberlin, Nvidia
  • Jim Keller, Intel
  • Ken King, IBM
  • Gregory Kurtzer, Sylabs/Singularity
  • Forrest Norrod, AMD
  • Thierry Pellegrino, Dell EMC
  • Michela Taufer, SC19 Chair
  • Steve Scott, Cray
  • Jack Wells, OLCF