TACC’s Dan Stanzione Shares Vision for Leadership Computing

By Tiffany Trader

December 14, 2022

Texas Advanced Computing Center Director Dan Stanzione and HPCwire Managing Editor Tiffany Trader met in Dallas to discuss the biggest trends in HPC and the hottest topics of SC22, starting with TACC’s “supercomputing saloon” styled booth. Asked to imagine himself king of HPC for a day, Stanzione shares his insights on composable computing, the risk of HPC getting left behind, the tension between choice and consolidation, and the promise of technologies like HBM and fiber optics. He also provides an update on TACC’s Leadership Class Computing Facility plans.

The interview can be viewed below and is also included (beneath the video) as a lightly edited transcript.


Tiffany Trader: Hi, I’m Tiffany Trader, managing editor of HPCwire. We’re here at SC22 in Dallas. And with me is Dan Stanzione, director of TACC, the Texas Advanced Computing Center. Like I said, we’re here in Dallas for SC, how far of a drive is it coming from TACC in Austin?

Dan Stanzione: Right now, I can make it in about two hours and 45 minutes; you throw in rush hour and double that. But, yeah, under three hours.

TACC booth at SC22

Trader: It’s kind of a special show. You’re still in Texas here. And you’ve got a really cool booth. You’ve reprised the booth from SC18, which was also here in Dallas. So how’d that come about? Did you have some of the things in storage? Or was it a recreation?

Stanzione: Yeah, a lot of it is in storage. They built a new frame, the doors are new, there’s a few things that we’ve changed out. But yeah, we designed that basic concept for the booth for SC18 when we were here in Dallas. We wanted to bring sort of Texas Old West with high tech. So we went with the sort of, you know, Supercomputing Saloon theme for it, high-tech saloon.

Trader: And what did you have going on there besides the cool mugs, and the cool t-shirts that were very much in demand?

Stanzione: Yeah, those seem to have become legendary. I always enjoy – I go to conferences in Germany and watch people walking around with TACC shirts that I’ve never met before because they picked it up at Supercomputing. But, you know, hidden in the back of that booth with the big long tower, there’s a couple of conference rooms. So we had dozens of meetings this week in the booth that didn’t involve me walking across the street to hotels, so that’s always excellent. We had Robert McLay talk about his XALT and Lmod tools that are used all around the world for managing modules and figuring out what libraries get loaded on HPC systems and monitoring them. We did a fireside chat for a lot of people doing regional networking here in the area. But mostly, it was just a place to meet with people; we tried to make it open and inviting and a place people want to hang out. And we’ve had thousands of people through. And it’s great after a couple of pandemic years to get to sort of see everybody in the flesh again, without the structure of Zoom, where you can just chat with folks. It was fantastic.

Stanzione with James Reinders (Intel) at SC22

Trader: And during the opening night Gala, you had a chat with (Intel’s) James Reinders over at the Intel booth. Do you think they’ll have you back?

Stanzione: Well, that was the first of I think three times I spoke at the Intel booth this week, so they haven’t kicked me out yet. Next year is another matter. We’ll have to see what happens.

Trader: But it was a fun dialogue. What were your takeaways?

Stanzione: Yeah, so James and I just wanted to sort of chat about exascale software and where it was going – you know, both the direction Intel’s taking and the direction the industry seems to be taking. And, you know, I think what’s out there is sort of a pretty heavy C++ focus on the very largest exascale apps, but we still have tons of Fortran apps, tons of Python apps. I think one of the challenges we face as an industry is we’ve never really settled on the way to build HPC software. So we still struggle with that. But you know, given the reality on the ground, we have to look at multiple pathways to get people to use big machines. And the more we focus on this sort of one path for exascale, the more people we’re going to leave out in the long run. Exascale is just the top of a very broad pyramid in HPC at the moment. I think we talked about all the pathways to do that, and the ways we can make software better. I mean, for energy efficiency, on almost any architecture, there’s a lot of room to go in making better software.




Trader: Yeah, for sure. And the cost savings with regard to energy efficiency are a big factor as well. So at TACC, you’ve got the follow-on to the Frontera system coming up. That’s a continuation of the NSF leadership computing award, which has been advancing toward the creation of this new Leadership Class Computing Facility, the LCCF. (Dan: Yes!) And it sounds like Intel might be a partner. I know you give hour-long talks about the LCCF, but could you give us some of the highlights?

A concept of the LCCF. Image courtesy of TACC.

Stanzione: Sure. Again, it will be the system after Frontera, to sustain the NSF community. It’s really a pivot in NSF strategy for the way they’re funding high performance computing, because for the last 20 years – through TeraGrid, XSEDE, and now ACCESS – they’ve been sort of four-year, one-off system awards. I mean, we’ve done very well at that: Ranger, Stampede, Stampede 2, a bunch of other systems. But there’s been no sort of sustained commitment, right, to either architecture or to letting the users know what’s going to be out there. And you look at things like the big scientific instruments – you know, the James Webb Space Telescope, the Vera Rubin Observatory, the National Ecological Observatory Network, right? They operate on decade or 20-year timescales for data, and saying, well, you can run here, but in a couple of years this might go away, and there might be something else and there might not – it’s really not a sustainable way to build up. They’re all forced to build their own infrastructures because they just can’t count on anything being there. So we’re trying to pivot to a model where there are predictable large systems, and we can make these sort of long-term partnerships with the big experiments that are really advancing science and society.

So we’ll be building bigger datacenters, of course. We’re adding another 15 megawatts of capacity to deal with those systems. And we’ll be building a big system to replace Frontera. And, yeah, Intel, you know, we’ve had a fantastic partnership with Intel through a bunch of systems. I point out Ranger was an AMD system. We deal largely with Dell recently, but Lonestar5 was a Cray, right? I mean, we will deal with wherever the best systems are for what we think our users see. But yeah, the idea is to get sort of 10x the scientific throughput out of the next one, you know, versus Frontera, the current one. So sort of over 10 years to go up about an order of magnitude. And we’re still two and a half years out on actually deploying the system, we’re about a year and a half from starting on the datacenter work. So I haven’t really made a final decision on what it looks like yet, but you can imagine who the main contenders are, and what the options are to put things together. And it really is, where we think our users are on software, and how fast they can adapt to changes that are coming.

Trader: And the ecosystem right now, there’s more choices out there, you know, to look at and assess.

Stanzione: Yeah, well, in some ways, there’s more choices. And in some ways, there’s sort of consolidation, right, as we come off a decade and a half of sort of mix and match, right, you pick your, you know, maybe you use Intel, for your CPUs, you use Nvidia for your GPUs, you went to Mellanox, or somebody for your network. And now what we see is sort of consolidation, right, where there’s sort of the AMD ecosystem, the Nvidia ecosystem, the Intel ecosystem. In some sense, that simplifies some decisions, but it’s harder to sort of, you know, you sort of have to buy the formula instead of the sort of cookie cutter approach to putting things together. And in some ways, it’s given us less choices, and made some of those decisions harder.




Trader: Right, so I’m gonna skip over that entire question. Because you’ve answered that. But you know, I’ll dive into a little bit more you know, with these swim lanes, we’re getting some interesting combinations. And one of the combinations is the Intel Max series. They kind of drew a horizontal line at the top of their stack with the Max series CPU, aka Sapphire Rapids plus HBM. And then the Ponte Vecchio GPU, which also has HBM. So I was kind of thinking, it might be interesting to see what use cases you get combining the HBM with the HBM.

Stanzione: I’ve been really excited about HBM for a long time. It’s a sort of a shame that it’s taken so long to get some of this to market. And there’s been a lot of complications with just sort of integrating it, stacking it on the die, and making all of that work. But we know, I think – really, every large operator knows that we’re memory bandwidth bound in a lot of cases, right. And that’s one of the reasons, it’s not the only reason, but it’s *a reason* that GPUs have done so well, in the last few years is, there’s a lot more memory bandwidth per operation than you get out of a CPU with traditional DIMM based memory, right, because the GPUs have had not always HBM, but they’ve had, you know, the stacked fast memory, GDDR, before, that’s given them a huge bandwidth advantage over the CPUs. And with HBM, we can start really comparing – if we take memory bandwidth out of the equation – you know, what’s best in a CPU sort of cache-based architecture, what’s best in a GPU streaming core type architecture. But really balancing that better I think is going to be a huge help.

And you know, we’ve tested some HBM chips, now that we’re starting to see samples of mainstream chips. We got the Fujitsu ones they put in Fugaku a couple of years ago, and saw some interesting stuff there. But we’re seeing, you know, on the same Sapphire Rapids cores at the same clock rate, a huge leap forward in application performance when you integrate HBM. Of course, you’re trading off some power for that HBM, and you’re trading off capacity, right, because if we do an all-HBM chip – it’s really expensive to do HBM and put main memory in, so you’re probably going to do one or the other. So you have to fit everything into a smaller memory, which is a trade-off. But a lot of times we see application performance jump by 50% or more. Which, you know, when you’re weighing how many CPUs, how many GPUs, 50% better CPU performance changes the equation some. And then yes, having it on both sides of the system, that much faster memory, you’re no longer as limited by how much you can fit in GPU RAM – both the HBM and then the faster connectivity that we see with NVLink and the other fabrics coming out to put things together. You can think about bigger models that don’t have to squeeze into that GPU RAM. So I think that’s a pretty exciting development. It’s really expensive right now, but if we can get it to work – you know, yield reliably at high levels – I think it could be a game changer.
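Stanzione’s point about bandwidth-bound applications can be illustrated with a toy roofline-model calculation. Every number below is hypothetical, chosen only for illustration – these are not TACC measurements or real chip specifications:

```python
def attainable_gflops(peak_gflops: float, mem_bw_gbs: float,
                      intensity: float) -> float:
    """Roofline model: achievable performance is capped by either peak
    compute or memory bandwidth times arithmetic intensity (FLOPs/byte)."""
    return min(peak_gflops, mem_bw_gbs * intensity)

# Hypothetical CPU with 3 TFLOPS peak, running a kernel at
# 0.25 FLOPs/byte (typical of many bandwidth-bound HPC codes).
ddr_bw, hbm_bw = 300.0, 1000.0   # GB/s, illustrative figures only

ddr_perf = attainable_gflops(3000.0, ddr_bw, 0.25)   # bandwidth-limited
hbm_perf = attainable_gflops(3000.0, hbm_bw, 0.25)   # still bandwidth-limited
print(f"DDR: {ddr_perf} GFLOPS, HBM: {hbm_perf} GFLOPS")
```

In this low-intensity regime, performance scales directly with memory bandwidth rather than core count or clock rate, which is why swapping traditional DIMMs for HBM on otherwise identical cores can lift bandwidth-bound applications by large margins, as described above.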




Trader: And continuing the theme of architectural decisions and future directions: Jack Dongarra was here. The Top500 list co-author, inventor of the High Performance Linpack benchmark, and newly minted Turing Award winner has been saying that HPC is facing a crisis driven by the almost absolute reliance on commodity hardware – COTS, commodity off the shelf. And there’s this phrase: Reinventing HPC. There was a panel with Jack, Dan Reed and others. So do you think HPC needs to be reinvented? And if you were king of HPC, what would you dictate?

Stanzione: Yeah. So that’s an interesting question. It was a fascinating discussion. And by the way, congratulations to Jack on the Turing Award, richly deserved so many contributions over such a sustained period. So I work with Jack a lot. Congratulations, Jack, if you’re listening. I got to tell him that in person this week. But yeah, there were some really interesting things. I think, you know, the drivers – when we talk about commodity off the shelf, right, that sort of started with the Beowulf project 30 – it will be 30 years next year, right, 29 years ago. NASA Goddard, Thomas Sterling, Don Becker. Right. And the economics then, and I think it’s still the economics now is supercomputing is not a big enough market, versus the cost of a fab. Right. We’ve talked about HBM and fabrication, right, new fabs, you see the CHIPS act, right? It takes $20 billion to bring a new fab online, right? It’s a huge number. It’s the whole HPC marketplace for a year. So the notion that we’re gonna have fully custom silicon for HPC, you know, it just isn’t there. I firmly believe that any component we’re going to get at scale, on the chip side, is either what the cloud guys want, or something the cloud guys would buy. Because if it’s not going to be used by the big clouds, there’s not going to be enough of them to make it cost effective. So I think that part of the off the shelf argument still holds. At the same time, I do think HPC really needs to evolve. You know, we’ve had so many conversations where, for a community where we’re building the very biggest machines in the world to tackle the hardest science problems, we can be awfully conservative sometimes, right, about making changes. You know, fundamentally, we still program in MPI, OpenMP as we have for 30 years, right? We are very slow to change architectures. So certainly there’s a bounce back and forth between flavors of x86. But really, CPU plus GPU is the only thing we really achieved in 25 years. 
I think Torsten Hoefler made a great point on that panel, that really, the network is where we’re innovating, right. That’s the thing that’s not like the cloud, right, in many ways. Although, with the rise of AI, I think the cloud guys want to be more like that, and they’re gonna have a huge impact on networks coming up. But we need to have – you know, personally, I believe in the long-term future we’ll see semi-custom silicon. You see both Intel and AMD each spent tens of billions of dollars on FPGA companies, right. And you can imagine a world where there’s no custom cores for HPC, but we can tweak existing designs from a library of parts to say, I need a few more vector units, a few more accelerators on a semi-custom chip with sort of stock IP that’s built. But I think we need to feel like we can explore and do more experimentation in that area, and not just do more of the same, or we will get left behind – there’s a huge amount of innovation going on in the cloud to deal with very, very large data analysis, very large AI problems. You know, they’re not as good at the modeling and simulation stuff, but I think if we don’t evolve, we would get left behind. Absolutely.




Trader: So there are different architectural directions: you can bring everything close together, or you can go disaggregated – this composable approach people are talking about. We see some examples of it, but it hasn’t reached its full fruition yet. You were on a panel called Composability Smackdown, where it was kind of a setup – people were chosen for the “for” and “against” sides, like a high school debate, but more fun. You were on the for-side, the pro-side. Can you revisit some of your arguments in favor of composability, and then steelman some of your opponents’ arguments?

Stanzione: Sure, absolutely. Because you know, we did that for fun, right? And we were all prepared to argue either side if we needed to in those arguments. And it was definitely a fun thing to do. But it is an example of what I’m just talking about, you know, the need to evolve the need to experiment in terms of how we put things together. And, you know, arguably, there are – well, I think the biggest “for” argument, right – and it may not apply to the very largest exascale systems at this point, but if you were to walk around the floor here, we’re surrounded by hundreds of people selling HPC products, right? And if you talk to each one of them, and all the users wandering around about their applications, if you tried to get an answer to what is the right number of GPUs per node, you could talk for days, you will get many conflicting answers. And in the end, it will come down to the answer. It depends, right. It depends on your workload, depends on how tightly coupled they are, it depends on how much memory you need a whole bunch of things, right? If you went around and asked the same question about how much local storage you should put in a node, you’ll get the same range of answers. And it’ll end up being, it depends. And perhaps the same for the amount of memory you have in a node, right? People who are like, well, we should go small HBM, no DIMMs because everything can fit in a gigabyte per core. And there are others like, no, we have applications that are 64 gigabytes a core, right? So there’s no single solution for hardware for that. And if there’s no single solution, then I think composability has the opportunity to make sense in what we do. And I think that’s the biggest “for” argument is, you know, if you’re buying not thousands of nodes, where you have enough infrastructure – I mean, if you’re the cloud, you just buy 100 more racks of whatever it is, right? 
And you see in the cloud, they have literally hundreds of instance types now, right, of different combinations of hardware and nodes. And they only have hundreds of instance types, because they have customers who want each one of those types, right. But if you’re building a departmental scale cluster, you can’t do hundreds of different configurations, buying static servers, right, you’re going to have to do some form of composability to make that happen. And I think that’s the pro argument. I think the con argument, which has a lot of compelling parts, because not everything wins in technology just because it’s cool, and in fact, doesn’t always win because it’s best, it’s because it’s economically effective and usable, right. So the risk with composability, the most fundamental one is almost anything you do where you’re taking something tightly coupled and breaking it apart, you’re adding latency, right? And with CXL evolving standards and stuff, it’s possible that latency can come way down from what it’s been, and it may be good enough, but you could be adding latency. And that, you know, potentially slows things down the opposite of what we normally try to do. So that’s a problem. The other one is the usability argument. I personally think we can hide a lot of the complexity from the users. But your system staff need to know how to do that. Right. So they’re going to have and feel some of that complexity. But again, I think our system staff need to learn to evolve and not do exactly the same thing. So there, there are definitely trade-offs there, it’s not clear there’s a winner. So I would say we’re making some investments at the experimental scale because I don’t know the answer to these questions, and we need to find out, right, and so turning a blind eye to it just because well, maybe it’ll add latency. 
You know, we’ve run GPUs over PCI very effectively for the last decade and a half; we can get less than that latency in a composable system, how is that not useful?
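The latency trade-off Stanzione describes can be sketched with a toy amortization calculation. The figures are purely hypothetical – no vendor’s actual PCIe or composable-fabric latencies are being quoted here:

```python
def offload_time_us(kernel_us: float, link_latency_us: float,
                    transfers: int) -> float:
    """Total time for an accelerator offload: kernel execution time plus
    one round-trip link latency per host<->device transfer."""
    return kernel_us + transfers * link_latency_us

# Illustrative only: a 500-microsecond kernel with 4 transfers, comparing
# a local PCIe-class latency (~1 us) against a hypothetical composable
# fabric with double that latency (~2 us).
local = offload_time_us(500.0, 1.0, 4)    # local attachment
fabric = offload_time_us(500.0, 2.0, 4)   # composed over a fabric
overhead_pct = 100.0 * (fabric - local) / local
print(f"local: {local} us, fabric: {fabric} us, overhead: {overhead_pct:.2f}%")
```

When kernels are long enough to amortize the link, even doubling the latency adds well under one percent of runtime; for fine-grained, chatty workloads the link term dominates instead. That is exactly the trade-off the panel debated: whether the workloads that benefit from composable hardware outnumber the ones the added latency hurts.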

Trader: Do you think that the infrastructure will be moving towards embracing on chip optical and silicon photonics? And what do you think the timeline might be for that?

Stanzione: So I’m not sure of the timeline. All of these things – you know, the physics is there, the technology is kind of there, but there are a lot of engineering challenges. Like we’ve seen with HBM, right, the little things of, well, when we start stacking dies, we have to think about the joint failure rate of those coming together, and how we can get that down low enough. It may stay expensive for a few more years, right, and not come to pass… It already hasn’t come to pass as fast as I think people thought it would. I don’t work closely enough on it to, you know, really have a sense of what the timeline is going to be. But I think it’s important that it happens, because we spend an awful lot of power getting a signal off the chip onto the motherboard – you know, making the trace big enough, needing enough fan-out from the transistors to talk to the trace, and then getting it out to another digital electro-optical converter. If we can just send the photons right off the chip, we’re gonna save a lot of power, and we’re gonna save some latency. And I think that’s going to really help us compose blocks together, when we can have these, you know, fiber-optic-speed connections chip to chip, perhaps at the board level or across very small boxes while we’re composing modules.

Trader: One of the people on the against side was Ruth Marinshaw from Stanford. She said something to the effect that her staff doesn’t have the capacity to manage anything else or any more complexity. You’re at a well-resourced center. What would your response be to that kind of comment?

Stanzione: So I’m sensitive to that. And by the way, Ruth just walked by where we were talking, but I think she’s out of earshot now. I mean, everything we do is a challenge for staff, and we do ask an awful lot of system staff. I think it’s important that we, you know, continue to work – as several organizations are doing – to professionalize the role of that staff and recognize the contributions that they make to the research process. You know, we’re not really IT; we’re building custom research instruments that happen to use IT components, right. And there’s often a tendency to look at center staff as, you know, sort of service people who are supposed to be seen and not heard. I do think we need to change that culture and recognize how hard they work and what their contributions are. But at the same time, again, they’re going to have to evolve, right? We’re seeing more object stores. Running InfiniBand is a lot harder than running Ethernet; we did it anyway, because to make commodity clusters work, it was important to have low latency networks. I think this is going to fall in the same category.

Trader: Well, we’re back here in Dallas, four years after the last one. And next year we’re going to be in Denver; it’s going to be the 35th anniversary. There are 11,000 attendees here. In St. Louis (SC21) it was about 3,500. So it (SC) didn’t incrementally come back, it just kind of bounced back to the regular water level of attendance after, you know, just meeting virtually. So what’s it been like for you to be back at SC, and what have some of the highlights been for you?

Stanzione: It’s tremendous to be back in person. I mean, last year there were still plenty of legitimate pandemic concerns; I didn’t go last year. So this year, yeah, I think we’re back at maybe 90%, right. And I think most of the remaining issues are not really pandemic related. It’s, you know, visa backlogs and stuff like that getting people here.

Trader: A third of the people here are international [ed note: exhibitors], actually.

Stanzione: So some got in, but you see less from China perhaps and less from a few countries that we used to. But, you know, so much of the interaction that happens at SC and really any conference is the stuff that’s not in the formal program. Right? And I really missed that. I mean, first of all, there’s the rumor mill and all the information that spreads by catching up with colleagues and seeing what’s going on with them, a chance to chat with you in person. It’s a different experience, right? So, you know, I think the conference did a great job – and many conferences – you know, being remote, and then hybrid – hybrid’s a lot harder than remote – where you still got everything you would get out of sitting in the sessions, but you didn’t get everything else. And this year, we get everything else back. And it’s been a great experience.
Trader: Yeah, I like all the connections we make, and all the information we share, and then just those serendipitous moments that happen are real fun.

Stanzione: Absolutely.

Trader: So we’ll be there in Denver next year. SC23. Dorian Arnold is the chair and I look forward to seeing you there.

Stanzione: We’ll be there. Looking forward to it again.

Trader: Thanks for watching. We’ll see you next time.
