IBM Unlocks the Cell

By Nicole Hemsoth

September 15, 2006

Last week, the DOE's National Nuclear Security Administration selected IBM to design and build the world's first supercomputer that will use both Cell Broadband Engine (Cell BE) processors and conventional AMD Opteron processors. The petaflop machine, code-named Roadrunner, is scheduled to be deployed at Los Alamos National Laboratory sometime in 2008. This not only represents IBM's first supercomputer containing Cell processors, but it also signifies the company's first large-scale heterogeneous system deployment.

HPCwire got the opportunity to talk with David Turek, vice president of Deep Computing at IBM, about the new system. In this extended interview, Turek reveals IBM's strategy behind the Roadrunner platform and how it fits into the company's supercomputing plans. He also discusses IBM's overall approach to hardware accelerators and heterogeneous computing.

HPCwire: What is the significance of the Roadrunner deployment? Is it a one-off system or does it represent the start of a new line of IBM supercomputers?

Turek: The significance of Roadrunner is that this is our preferred architectural design for the deployment of Cell in the HPC application arena. To be clear, we have no plans to build a giant cluster just out of Cell processors. Instead, we think the correct model is the Roadrunner model, which employs Cell as an accelerator to a conventional microprocessor-based server.

Over the course of time, we expect accelerators to become a key element of our overarching strategy. So the work that we do here is designed, in particular, to be sufficiently general to encompass a variety of models of how accelerators might be deployed.

Our intention with respect to a more broadly propagated version of Roadrunner is an assignment we've given ourselves for the fall: to see exactly how far this can be extended and how deeply it can be played in the marketplace. We've got to resolve programming model issues. Secondly, the early Cell deployment is based on single precision floating point; that's going to go to double precision [for the final deployment]. So there's work to be done here to see exactly how this plays out.

In a sense this is no different than our launch of Blue Gene, which nominally was targeted at a very narrow set of applications, but which over the course of time demonstrated much broader utility. And if you go back still further in time, when we launched the SP system back in the 90s, we viewed that as a niche product, and that too became more broadly deployed.

So this is an addition to our portfolio. It is not meant to displace or replace anything. We just think that the diversity of application types is such that there will be a need for a broader portfolio rather than a narrower one.

HPCwire: Are you looking at other accelerator devices besides Cell?

Turek: Always. Our technology outlook is pretty broad. We're looking at trends several years into the future. So we've been looking at a variety of schemes for acceleration, and it goes beyond just looking at the conventional idea of using an FPGA for an accelerator — which, by the way, we don't think is a good idea. And it goes as far as us beginning to think about system-level acceleration as it applies to workflow, as opposed to process-level acceleration as it applies to specific applications.

Let's look at process-level optimization and application decomposition and see how that maps to these kinds of models of acceleration that are embodied in Roadrunner. We know that a lot of people will experiment and use accelerators. We can't be specific about what they'll all look like over the course of time. But we think that if we get the programming model right, it should be extendable to cover a more diverse range of accelerator [architectures].

So, for all the right reasons, we're extraordinarily proud of Cell and we think it has a huge opportunity to make a terrific impact in a variety of market segments. But we're not blind to the fact that other people can or have developed accelerator technologies.

HPCwire: While the Cell architecture certainly has generated a lot of interest in the HPC community, some of the people I've talked to have expressed doubts about the suitability of Cell for mainstream scientific and technical computing.

Turek: That's why I drew this stark distinction at the beginning about our plans to just build a Cell-based cluster. Because I think when you talk to people and you ask the question the way you posed it, many people will naturally make the assumption that we're going to have a system entirely based on Cell processors and that's it. And I think that under that scenario we would agree — that would be a bit of a stretch. But on the other hand, with a lot of thoughtful analysis over many months, both internally and in collaboration with the teams at Los Alamos (as we got involved in responding to the RFP), we thought that this notion of deploying Cell as an accelerator to a conventional architecture was a better way to go.

HPCwire: You said that the final deployment of Roadrunner will incorporate a double precision floating point implementation of the Cell processor. What will you be accomplishing in the early stages of Roadrunner that uses the single precision version of Cell?

Turek: The early deployments of Cell are really meant to help us deploy and debug all the software tools and the programming model. All that gets preserved regardless of whether you're single or double precision. And then as we go down the path of producing the double precision Cell BE, that will be more a matter of deployment and scaling issues than of the programming models, software tools and things of that sort.
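
[Editor's note: As a purely illustrative aside, one common way application code survives a single-to-double precision hardware transition is to write it against an abstract floating-point type, so that only a build flag changes when double precision silicon arrives. The C sketch below uses invented names (real_t, USE_DOUBLE); it is not part of any IBM toolchain.]

    /* precision.h: hypothetical sketch. "real_t" and USE_DOUBLE are
     * invented names, not part of any IBM toolchain. */
    #ifndef PRECISION_H
    #define PRECISION_H

    #ifdef USE_DOUBLE      /* build with -DUSE_DOUBLE once double-precision Cell arrives */
    typedef double real_t;
    #define REAL_EPS 1e-15     /* convergence tolerance scales with precision */
    #else                  /* default: early single-precision Cell silicon */
    typedef float real_t;
    #define REAL_EPS 1e-6f
    #endif

    #endif /* PRECISION_H */

Kernels, tools and tests written against real_t carry over unchanged, which is the sense in which the software work is preserved across the precision transition.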

HPCwire: On a related topic, are you interested in the work Jack Dongarra is doing with the Cell, using single precision hardware to provide double precision math in software [see Less is More: Exploiting Single Precision Math in HPC http://www.hpcwire.com/hpc/692906.html]?

Turek: Absolutely. We talk to Jack all the time about this. I think we may experiment with it or have our other Cell collaborators experiment with it — if Jack's OK with that. We consider the work Jack is doing to be very, very important, as is the work of all of our other collaborators. By the way, there are many such individuals, spread across many universities around the world.

So we'll talk to Jack and look at that pretty seriously. If we all have a meeting of the minds about how to begin to deploy this, we will let clients like Los Alamos or maybe others make use of that technology. Absolutely, we will do that.
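
[Editor's note: The technique referenced here performs the expensive factorization at single-precision speed and then recovers double-precision accuracy by iterative refinement: compute the residual in double precision, solve for a correction in single precision, and repeat. The minimal C sketch below is illustrative only, using naive Gaussian elimination on a tiny hard-wired system rather than actual library code.]

    /* Mixed-precision iterative refinement for Ax = b: a cheap
     * single-precision solve, then double-precision correction passes.
     * Illustrative sketch only; production codes use LAPACK-style routines. */
    #include <stdio.h>

    #define N 3

    /* Naive single-precision Gaussian elimination (no pivoting; demo only). */
    static void solve_single(double A[N][N], const double b[N], double x[N]) {
        float a[N][N], v[N];
        for (int i = 0; i < N; i++) {
            v[i] = (float)b[i];
            for (int j = 0; j < N; j++) a[i][j] = (float)A[i][j];
        }
        for (int k = 0; k < N; k++)              /* forward elimination */
            for (int i = k + 1; i < N; i++) {
                float m = a[i][k] / a[k][k];
                for (int j = k; j < N; j++) a[i][j] -= m * a[k][j];
                v[i] -= m * v[k];
            }
        for (int i = N - 1; i >= 0; i--) {       /* back substitution */
            float s = v[i];
            for (int j = i + 1; j < N; j++) s -= a[i][j] * (float)x[j];
            x[i] = s / a[i][i];
        }
    }

    int main(void) {
        double A[N][N] = {{4, 1, 0}, {1, 3, 1}, {0, 1, 2}};
        double b[N]    = {1, 2, 3};
        double x[N], r[N], d[N];

        solve_single(A, b, x);                   /* fast low-precision solve */

        for (int it = 0; it < 5; it++) {         /* refinement iterations */
            for (int i = 0; i < N; i++) {        /* residual r = b - A*x in double */
                r[i] = b[i];
                for (int j = 0; j < N; j++) r[i] -= A[i][j] * x[j];
            }
            solve_single(A, r, d);               /* correction, again in single */
            for (int i = 0; i < N; i++) x[i] += d[i];
        }

        for (int i = 0; i < N; i++) printf("x[%d] = %.15f\n", i, x[i]);
        return 0;
    }

For a well-conditioned system such as this one, a few refinement passes recover essentially full double-precision accuracy while the O(n^3) work stays in fast single precision.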

HPCwire: You mentioned you're not really interested in FPGAs as accelerators. Why is that?

Turek: Because they're really hard to program and they're pretty expensive, relatively speaking. We think they're really good for prototyping. But we believe a better model is to put that [functionality] into a custom ASIC or something else. I'm not convinced that the software tools and the other things you need for programming them will ever make it, fundamentally. But I think a model built on custom ASICs or things like Cell, which can take advantage of conventional high-level programming languages and compilers, etc. (and yes, there's work to be done here on programming models), is probably going to be a more effective way to get the kinds of speedups that are nominally associated with strategies of acceleration.

I mean if you look, for example, at the XD1 system that Cray offered, I don't think there is much uptake in the market for that technology. I think the utilization of FPGAs in that was probably fairly scant — you'd have to talk to Cray about that and get some facts on it. There's clearly been more interest from companies talking about things like ClearSpeed [co-processors].

HPCwire: How do you envision applications will be deployed on Roadrunner?

Turek: The design of Roadrunner can be looked at in a couple of different ways. First of all, by having a very large Opteron cluster as kind of the workhorse part of the system, one could choose just to deploy applications quite conventionally on that cluster to achieve the expected benefit. The second thing is that the system gains flexibility through the deployment of Cell processors as accelerators, in conjunction with the Opteron cluster, which gives you something like a “turbo-boost” on applications that are capable of exploiting the acceleration. So with Roadrunner, you have choices. You can deploy applications conventionally — read that as MPI — and then you can marry that with a model that uses library calls to give you access to the compute power of the Cell.
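
[Editor's note: IBM had not yet published the Roadrunner APIs at the time of this interview, so the library entry points in the C sketch below (accel_available, accel_stencil) are invented placeholders. The sketch only illustrates the pattern Turek describes: a conventional MPI code on the Opteron side, with a library call that offloads the hot kernel to an attached Cell when one is present.]

    /* Hypothetical sketch of the "MPI plus accelerator library" pattern.
     * accel_available() and accel_stencil() are invented placeholders,
     * not the actual Roadrunner API. */
    #include <mpi.h>
    #include <stdlib.h>

    #define LOCAL_N 1024

    /* Placeholder: would query for an attached Cell accelerator. */
    static int accel_available(void) { return 0; }

    /* Placeholder: would ship the kernel to the Cell's SPEs. */
    static void accel_stencil(double *u, int n) { (void)u; (void)n; }

    /* Conventional host version of the same kernel (Opteron path). */
    static void host_stencil(double *u, int n) {
        for (int i = 1; i < n - 1; i++)
            u[i] = 0.5 * (u[i - 1] + u[i + 1]);
    }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double *u = calloc(LOCAL_N, sizeof *u); /* this rank's slab of the domain */

        /* The MPI structure is identical either way; the accelerator
         * is reached through a library call, as described above. */
        if (accel_available())
            accel_stencil(u, LOCAL_N);          /* "turbo-boost" path via Cell */
        else
            host_stencil(u, LOCAL_N);           /* plain Opteron path */

        /* Halo exchange with ring neighbors, as in any conventional MPI code. */
        int up = (rank + 1) % size, down = (rank - 1 + size) % size;
        MPI_Sendrecv(&u[LOCAL_N - 2], 1, MPI_DOUBLE, up,   0,
                     &u[0],           1, MPI_DOUBLE, down, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        free(u);
        MPI_Finalize();
        return 0;
    }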

HPCwire: Roadrunner is described as containing 16,000 Opteron processors and 16,000 Cell processors. What's the significance of the one-to-one ratio of Opterons to Cells?

Turek: So, I'll be the first to say that we don't know everything. I think that all these ratios are going to have to be explored in more detail. Right now, for example, when you look at the Cell processor, it's one conventional processing engine and eight SPEs. Well, you could ask the same question there. Is that the right ratio? I think it's premature on the part of anybody to be declarative on this topic.

In the context of the Los Alamos application, we've given this a lot of thought and believe it's the right plan. Do we think there's no evidence in the world that could cause us to move away from it? Clearly not. I think as we get into deeper stages of development, both in software and deployment of hardware, and start running real applications (as opposed to running simulations), we're bound to learn something. And I will tell you that if what we learn says you need to tweak this a bit and go this way instead of that way, then we will absolutely do that to give our client the best possible performance.

HPCwire: Is the Blue Gene technology heading for petaflops in its roadmap as well?

Turek: I think the natural progression of what we're doing on these platforms is clearly to anticipate multi-petaflop systems down the road. So sure, if you look at Blue Gene today, the only thing that separates you from the deployment of a petaflop system is money. The future designs factor in a whole lot of other things — not only how you make a petaflop affordable, but also how you open the aperture to an enhanced set of applications. Basically, this is a reflection upon the experience that we, along with our collaborators, have had over the past year and a half with Blue Gene. And you make adjustments along the way. So do we have an intention to drive the Blue Gene roadmap forward? Absolutely.

And it's not at all in conflict with what we're doing here with Roadrunner because they're different programming models. For us, that's a key point of differentiation. Right now it looks like they may serve different application sets differently. For us that's fine.

We've never been strong believers in the notion that high performance computing, as a market segment, is homogeneous, or by implication, that the applications that characterize it are homogeneous. And I think that's partly caused by the fact that when we talk about high performance computing, we expand it to include applications that you'll find in financial services, digital media, business intelligence, etc. So we probably have a broader conceptualization of the marketplace than some of the niche players may have. As a result, that conspires to cause us to have a broader portfolio than some of those players might have.

HPCwire: With that in mind, what kinds of application differentiation do you see between Roadrunner and Blue Gene?

Turek: Clearly, the Roadrunner represents a bigger memory model than Blue Gene. But it also has a different kind of programming model. Today for example, MPI applications, in almost 100 percent of the cases, are capable of being ported to Blue Gene, usually within a day, with reasonably good performance. Tuning, we've discovered, takes maybe another two to five days to get really outstanding performance. With respect to the Roadrunner model, that's going to be a bit different because of the way that system is architected. We'll reveal more details about the Roadrunner APIs down the road; it's a little premature to do that now. We'll go public with that sometime this fall, for sure.

There are a lot of things that we can do in regard to mapping applications to the SPEs on the Cell processor. And there's a lot we can do in the evolution of the Cell processor. So for us this is just another integral part of our portfolio that we've got to sort out in the context of our existing technologies, mapped against how we see the development of different market segments. I can understand a small company or niche company saying “Well, IBM has two, three or four things, whatever the case may be.” But our view is that it's a big market that is intrinsically diverse, and a broad portfolio is actually what is required if you are really committed to serving the needs of your clients.

Consider really good scale-out applications, for example, Qbox, which right now operates at 207 teraflops sustained on Blue Gene at Livermore. Are you going to get better performance if you port it and tune it to Roadrunner? My guess is probably not. And the reason for that is that the architecture of the Qbox application is something that does really well with the kind of memory subsystem characteristic of Blue Gene, as well as the scale-out aspects of the networks in Blue Gene. For example, Roadrunner doesn't have the multiple network model that Blue Gene has. And as a result there are applications where the scalability won't be there. The important thing, though, is that in the context of the applications that are characteristic of Los Alamos, there is a high degree of confidence that the design of Roadrunner is actually more appropriate for those applications than alternative architectures.

So this brings me back full circle. You have to let the algorithms and the applications dictate the nature of the architectures you deploy.

HPCwire: Your Roadrunner “Hybrid Programming” software model sounds similar to Cray's “Adaptive Computing” vision. How would you compare the two?
 
Turek: Well, ours is real.
 
HPCwire: In what sense?
 
Turek: It exists. We're working on it. The APIs are defined. The programming is underway. We're committed to it as an important and strategic element of what we're doing.

It's hard for me to comment on the “Adaptive Computing” model from Cray. I guess it was meant to be some sort of universal solution, encompassing a broad range of architectures, all under one roof — scalars, vectors, FPGAs, etc. I don't know how that all works. So I would say it was more a statement of intention than a development plan.

With respect to the contract we signed with Los Alamos, we have a development plan. It's outside of the stage of intention. So when I say it's real, I mean the corporation has committed itself to execute on this and it will get done. It's different than making a speech and outlining a vision.

As far as I know, no one has signed a contract with Cray for an “Adaptive Computing” implementation. I don't know how to comment on its existence other than it's a statement of intent. With respect to Roadrunner, we have a contract with deliverables that start this fall. So I know that is concrete and real. And we're committed to it. That is the difference between “easy to say” and “hard to do.” By the way, we're not paying attention to what Cray is doing here. We have a keen understanding of the architectural needs embodied in Roadrunner and we're executing on that in the context of a pretty diverse application portfolio, which we think will help generalize what's embedded in the Roadrunner APIs. That's what we have to worry about; we don't need to worry about the musings of what someone might do sometime in the future.

[Editor's note: See Cray's response to these remarks below.]

HPCwire: You said that the Los Alamos deployment would begin in the fall. Do you think you'll be demonstrating something Roadrunner-like at the Supercomputing Conference in November?

Turek: I wouldn't be surprised. But remember, what we're talking about for 2006 will be heavy on the Opteron deliveries and lighter on Cell because we'll be focusing on the development of the programming model rather than on Cell performance. So in the context of doing demos and getting the “gee golly” kind of attention, I'm not sure that's what we'll be looking for at Supercomputing. I mean we've run demos for some time now at Supercomputing with Cell. And if you show the right visualization applications, people say “Wow, this is pretty cool.” There are going to be a lot of things coming out this fall that are going to demonstrate that Cell is pretty cool. But I think we will do something at Supercomputing and it's going to open the eyes of a lot of people.

HPCwire: Do you think the reaction to this new technology will be different from that of Blue Gene when it first started?

Turek: You've got to remember, two years ago, there were a lot of people in the industry that pooh-poohed Blue Gene. They said: “The microprocessor is not fast enough, there's not enough memory and here's all the things it can't do.” And every time somebody said that to us or one of our clients, we put a little attention on it and without any dramatization, we said “No it really can do these things.”

I would characterize our activities on the Roadrunner project as being entirely pragmatic and empirical. We're moving away from discussions of theory, speculation and vision. So we're just going to build the damn thing and see what it really does.

We've committed a lot of resources to the government to do this and we're going to do everything we can to make it a success. But personally, I'm not going to pay a lot of attention to people sitting on the sidelines giving me theoretical reasons why it won't be good or it can't work or what have you. We paid attention to that in Blue Gene and it turned out that most of those people sitting on the sidelines didn't know what they were talking about. We'll let the facts speak for themselves.

—–

In response to David Turek's remarks about Cray's Adaptive Computing vision, Jan Silverman, Cray senior vice president for corporate strategy and business development, responds:

“Industry experts that have been following Cray's product roadmap and Adaptive Supercomputing vision are aware of both our plans and progress to date – and understand that what Cray is doing is 'real.'

“Cray's Adaptive Supercomputing Vision, which we are implementing through a long-term collaboration with AMD and other technology partners, is exciting to customers and is progressing on schedule. The implementation strategy is to develop, in stages now through 2010, supercomputing products that increasingly adapt to applications by applying the optimal processor type to each application, or portion of an application. These systems will also be more productive, easier to program and more robust than any contemporary HPC system.

“Cray is uniquely qualified to execute on our Adaptive Supercomputing vision, because we have systems in the marketplace today with four processor types (AMD Opteron microprocessors, vector processors, multithreaded processors, FPGAs). We plan to deliver all of these processor capabilities into a single, tightly coupled system by the end of 2007. After 2007, we will add many more advances to make our Adaptive Supercomputing platform adapt to applications more transparently.

“The decision by the DOE Office of Science and Oak Ridge National Laboratory to award Cray the world's first order for a petascale supercomputer was influenced by their excitement about our Adaptive Supercomputing vision and their confidence in our ability to achieve it on time. NERSC, which recently returned to Cray as a customer with an initial order for a 100-teraflop system, is also enthusiastic about Adaptive Supercomputing.

“Cray looks forward to providing HPC users with Adaptive Supercomputing systems; IBM and others seem to be following Cray's lead by recognizing the importance of complementing industry-standard microprocessors with other types of processors. We consider this another proof point that the path Cray's R&D organization has been actively pursuing is the right one.”
