Stathis Papaefstathiou Takes the R&D Reins at Cray

By Tiffany Trader

January 26, 2017

Earlier this month, Cray announced that tech veteran Stathis Papaefstathiou had joined the ranks of the iconic supercomputing company. As senior vice president of R&D, Papaefstathiou will be responsible for leading the software and hardware engineering efforts for all of Cray’s research and development projects. He replaces Peg Williams, who is retiring after more than a decade with Cray but will stay on for a transition period of a few months.

Papaefstathiou’s tenure in technical computing covers a 30-year span. Most recently, he was SVP of engineering at Aerohive Networks, where he led product development for a portfolio that includes network hardware, embedded operating systems, cloud-enabled network management solutions, big data analytics, DevOps and mobile applications. Previously, he spent two years leading cloud development efforts at F5 Networks and more than six years at Microsoft, starting as a computer science researcher before being promoted to general manager in charge of robotics.

HPCwire spoke with Papaefstathiou to get a sense of how his enterprise and cloud background will be leveraged at Cray, as well as his larger vision and execution strategy.

HPCwire: Stathis, please introduce yourself and tell us about your background and how you came to this position.

Papaefstathiou: My background originally was in the HPC space. In the 90s I worked in a business unit as a postdoc and researcher in HPC. It was a very exciting time then in HPC because there were many different architectures and technologies. There was also a lot of optimism about the future, so people were trying to create single solutions that would solve all types of problems. I had the opportunity to work with the Cray Y-MP and [another Cray system]. My work primarily was to understand how to model the hardware architectures and describe applications in a way that let the customers of the technology best match their application with the appropriate hardware architecture.

As I mentioned, in the 90s there were a lot of different types of supercomputers, from the SIMD Connection Machine to massively parallel computers to shared memory computers and so on. So customers needed to understand, before they made a commitment to a certain model, that their application would run well. The various agencies were funding research in order to build these kinds of predictive systems.

For me Cray is obviously an iconic company. It’s a great honor, after working in the HPC community, to have the opportunity to work for Cray. It’s a very interesting industry because you always have to fight with the trends of commoditization. You always have to be on the bleeding edge of building new technologies. This is something extremely exciting for an engineer; the opportunity to always be working on the latest technology is not something you find in many places.

Finally, for Cray, I believe that in the last few years the company has embarked on a journey to go beyond the traditional HPC market, and I think this is a very promising direction. At the same time it’s very exciting, because it’s an inflection point for the company, and I have the opportunity to contribute there.

HPCwire: I understand you started out in HPC, but your most recent roles were very much in the enterprise datacenter/cloud realm as opposed to the traditional HPC space – and in the last couple years, Cray has really been promoting the convergence of supercomputing and big data.

Papaefstathiou: There is definitely convergence of technologies between enterprise cloud and HPC. One of the things that was sort of profound to me was that in my previous role I was the SVP of engineering for Aerohive Networks. This is a company that builds hardware for the edge of the network, but one of its differentiators in the market is that it collects data from this networking infrastructure in order to create business intelligence analytics: how the network is being used, but also how this data can contribute to the bottom line of the business. For example, if you are a retail company, you may want to know what traffic you have in your different physical stores, or where customers are spending more of their time within a store. So this is the type of data analytics that Aerohive is working on.

So part of my role was to build this big data analytics solution from the ground up. Of course we were working in the public cloud, like most companies start, and I realized a couple of things that were not obvious to me when we built the solution. The first was that actually building the solution – a big data, real-time solution at pretty substantial scale – was hard to do, especially if you take into consideration some of the constraints of the cloud architecture: you don’t have guarantees on latency, and you need to design the solution for fault tolerance from the ground up because you never know when you’re going to have a fault in the resources that you’re using in the cloud. So it was a very painful process of building the solution. The second thing that was sort of interesting is that at a certain scale, the cost benefit of using the public cloud changes. One of the things I find very exciting about the work that Cray is doing in the analytics space is that there is a class of problems, in terms of scale and complexity, where Cray supercomputing might be a better solution than the public cloud. So while we have the convergence of the technologies, we also have differentiation in the supercomputing space for big data analytics and machine learning solutions.

HPCwire: What are the products/technologies your teams will be working on in 2017?

Papaefstathiou: The first thing is getting into the exascale phase. We are working toward the next generation of supercomputing. What’s interesting is that in addition to the performance aspect, which is very important here, we have gained a lot of experience in the last few years building solutions for a broad range of workloads. Already today we have our cluster line, an analytics line with Urika, and of course a supercomputing line. As we move forward, it’s about creating a Lego model where we can take and combine technologies to support different use cases at different scales, using the same stack of technology. We already started doing this in 2016; for example, the Urika-GX comes with the Aries network, so we combined our supercomputing technology with our cluster technology to build a use case. Now we’re thinking more and more about how to be able to create this type of solution in a much more iterative and organized way.

I do believe that more and more of these supercomputing solutions will benefit smaller companies that are now doing analytics and machine learning, and they’re looking for the right type of computation platforms to solve these problems.

HPCwire: What is your interest in containers?

Papaefstathiou: Containers are a very useful tool for us. One of the things which is expensive in the supercomputing world is updating the system with a new software stack on top of the hardware. Containers provide us a way to make upgrades in a very lightweight manner, without having to make any change to the operating system and without impacting the other parts of the software stack. If, for example, you want to upgrade your analytics solution to the latest version, it’s very easy to just update the container on the compute node instead of having to bring nodes up from the ground and update the whole stack. That’s one use of containers. As we move forward, we can use containers for other types of use cases, for example multitenancy, which is a very good scenario because we are going to have multiple workloads running on the big systems, so being able to use containers as a mechanism to isolate compute nodes among the different workloads is an interesting application. And finally, containers can be used so you can build your application using our programming tools, package it in a container and send it to supercomputing nodes. It becomes a way to democratize the development of the code, because you can package it in a contained way and send it to the supercomputer to run.
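The upgrade workflow he describes can be sketched in miniature. The node names, image tag, and the `ssh`/`docker` command shape below are illustrative assumptions, not Cray's actual tooling; the point is only that upgrading the analytics stack reduces to changing a container tag rather than reprovisioning each node's software stack.

```python
def container_run_command(image, tag, node, workdir="/work"):
    """Build the command that would launch one containerized workload on a node.

    Upgrading the analytics stack then means changing only `tag`,
    not touching the node's operating system or the rest of the stack.
    """
    return [
        "ssh", node,                    # reach the compute node
        "docker", "run", "--rm",       # lightweight, throwaway container
        "-v", f"{workdir}:{workdir}",  # mount the shared work directory
        f"{image}:{tag}",              # only this tag changes on upgrade
    ]

def rolling_upgrade(image, new_tag, nodes):
    """Return the per-node launch commands for a rolling container upgrade."""
    return {node: container_run_command(image, new_tag, node) for node in nodes}

if __name__ == "__main__":
    cmds = rolling_upgrade("analytics", "v2.1", ["nid00001", "nid00002"])
    for node, cmd in cmds.items():
        print(node, " ".join(cmd))
```

Pushing a new version to the whole partition is then a loop over node names rather than a full OS-image rollout, which is the lightweight property described above.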

HPCwire: Thoughts on burst buffers and what we will see from Cray in that area?

Papaefstathiou: We continue to collaborate with NERSC on that, as well as on containers. DataWarp is a very important technology for us, and I think it’s going to be a great tool for getting to exascale, because moving data in and out of the system, from the compute nodes to the storage, really becomes a major problem at that scale. Having DataWarp and the burst buffer architecture in between these two layers of the system will be a very critical advantage that we have at Cray to solve these workloads at scale.

HPCwire: What are your major impressions of the state of HPC today? Trends, inflection points, future directions?

Papaefstathiou: I think that deep learning is a use case that can benefit from the use of HPC technologies. The work we did with Microsoft a few months back with the cognitive libraries – porting them to Cray and being able to get a lot of benefit there, both in scale and in time to execution – is an example of how supercomputing can be used there. Also, the plethora of processor architectures available to our customers now – the GPUs, the manycore/multicore systems, Xeon Phi and the traditional Intel processors – these can be matched to specific workload requirements. I was telling you before about this Lego model where you can take different types of technologies, put them in the same system and effectively customize the system for your workload; I think we will see more and more of this happening.

I do believe that the availability of HPC technologies behind a cloud front end is also another exciting possibility, because effectively we will democratize the use of HPC technologies for a broader audience. Right now there is a high bar for somebody to get into this space. With cloud providers hosting high-performance computers, that might be a way for the broader community to access this technology.

HPCwire: Interesting to hear you say that because earlier you mentioned how some of the people using cloud and cloud-like solutions could benefit from a more traditional product but the converse is also true.

Papaefstathiou: Absolutely, and I’ll give you an obvious example. One of the problems we will have at exascale is doing system management at huge scale: being able to collect data – monitoring data, performance data – from tens of thousands of nodes, being able to manage and analyze it, and being able to drive troubleshooting and optimization based on that. It’s a very hard problem. Folks are already doing this in the cloud community. There are some differences, and some adjustment has to take place, but this is an example of system management technology used in the cloud that can also be applied, with some adjustments, to supercomputing.
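The shape of that monitoring-and-troubleshooting problem can be illustrated with a toy aggregation sketch. The node names, metric values, and z-score threshold here are hypothetical, chosen only to show the kind of fleet-wide analysis involved, not any actual Cray or cloud telemetry pipeline.

```python
from statistics import mean, stdev

def summarize(samples):
    """Aggregate one metric stream from a node into summary statistics."""
    return {"mean": mean(samples), "max": max(samples)}

def flag_outliers(node_metrics, z=1.0):
    """Flag nodes whose mean metric sits more than z std-devs from the fleet.

    node_metrics: {node_name: [samples...]} for a single metric,
    e.g. per-node temperature or link error counts.
    """
    means = {node: mean(s) for node, s in node_metrics.items()}
    fleet = list(means.values())
    if len(fleet) < 2:
        return []          # no fleet baseline to compare against
    mu, sigma = mean(fleet), stdev(fleet)
    if sigma == 0:
        return []          # all nodes identical, nothing to flag
    return sorted(n for n, m in means.items() if abs(m - mu) > z * sigma)

if __name__ == "__main__":
    metrics = {
        "nid00001": [41.0, 42.0, 41.5],
        "nid00002": [40.5, 41.0, 41.2],
        "nid00003": [40.8, 41.1, 41.0],
        "nid00004": [88.0, 90.0, 89.5],  # a misbehaving node
    }
    print(flag_outliers(metrics))  # → ['nid00004']
```

At tens of thousands of nodes, the hard part is not this arithmetic but the ingest, storage, and latency of the data streams, which is exactly where the cloud community's experience applies.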

HPCwire: Speaking of exascale, what is your vision for exascale at Cray and can you speak to how exascale benefits will accrue to commercial HPC users?

Papaefstathiou: Exascale is very interesting. Because of the way they have organized the program [the US Exascale Computing Project], exascale is not about writing a benchmark and getting exascale performance; it’s about getting applications to run with exascale performance. This means that the system, the application and the whole stack have to be thought of very holistically, and a lot of hard problems have to be solved in order to get to this level. Things that in the past might not have been in the critical path of performance of applications or the system now become critical. We’re going to have to address problems that we didn’t have to this extent in the past, and I mentioned two of them. One is system management, which in the past was an interesting problem, but now being able to collect all this data, push the OS image to so many nodes, do this efficiently and upgrade the system efficiently – that will become a critical path in creating exascale systems. We talked about DataWarp; thinking about how to bring data in and out of the exascale system – these will be very hard problems that have to be solved in order to meet this goal.

One of the things we have started doing is applying the very high-end technology we are building for the big supercomputers to a broader market. I gave the Urika-GX example, where we took the Aries network that was designed for the supercomputer and put it into a much smaller form factor that can benefit a much broader community – enterprises doing analytics, for example. I think there is going to be an opportunity for some of these technologies to go downstream toward this broader market as we move forward. We’re thinking about this, we already have products in the market, and we will continue doing this in the future.

HPCwire: Are you actively focusing on meeting the requirements for the big Aurora supercomputer right now — is that one of the main things on your list?

Papaefstathiou: Yes, this is one of the drivers for getting to the exascale goal, absolutely. We do this often. We have these projects that are sort of the pilots in order to solve some of these hard problems to get to this goal. We’re working very hard on Aurora at this moment.

HPCwire: What else can you tell me about your larger vision for this position and some of the greater company goals you’ll be working to achieve?

Papaefstathiou: Peg Williams is my predecessor and she did a fantastic job building a very high-performing team here. One of the things I realized when I joined was that the baseline of the team is very high. We do have some new dynamics happening because we have a really broad product portfolio today. We support a lot of technologies. We have new products that we are introducing in the market, some of them beyond traditional HPC, for example our Urika analytics product line. Finally, we have this convergence of technologies: some of the technologies used in the cloud or in the enterprise can now be used in HPC. This means, from the team perspective, that while in the past we were working with a traditional HPC cadence in terms of execution, now we need to mimic, on some occasions, some of the dynamic nature of the cloud and enterprise side. This is reflected both in engineering systems and in engineering process. So we are going to also see convergence in the engineering process and the organizational approach in order to capture this requirement.

The other area is that it is not well known in the engineering community how impactful Cray products really are in solving some of the hard problems of the world, in basic science, in different enterprises and so on. I think there is a great opportunity for us to carry this message to the community beyond traditional HPC, through communicating our mission, creating excitement around the technologies that we’re developing, and creating momentum behind HPC in general and Cray in particular. And for that purpose, we need to provide the right environment both for our employees and for the friends of the company, so there really is also an opportunity there for us to get outside of traditional HPC and approach the broader engineering community.
