AI is the Next Exascale – Rick Stevens on What that Means and Why It’s Important

By Tiffany Trader and John Russell

August 13, 2019

Editor’s Note: Twelve years ago the Department of Energy (DOE) was just beginning to explore what an exascale computing program might look like and what it might accomplish. Today, DOE is repeating that process for AI, once again starting with science community town halls to gather input and stimulate conversation. The town hall program is being led by a trio of distinguished DOE scientists – Rick Stevens (Argonne National Laboratory), Kathy Yelick (Lawrence Berkeley National Laboratory), and Jeff Nichols (Oak Ridge National Laboratory). HPCwire’s coverage of the first AI Town Hall, held at ANL, can be found here. 

With the Exascale Initiative, even the name gives a hint to its goal: scale. The expectation was, and is, to boost computing power so that important problems currently out of reach of petascale computing can be tackled. Without diminishing Exascale’s immense potential impact, it’s worth noting that the problem-solving approaches are generally the same simulation and modeling techniques used for decades. AI promises something different. Scale is certainly an element, particularly in terms of handling massive datasets, but AI also brings new methodologies and new computing perspectives to science. What does that really mean? Well, for one thing, perhaps surprisingly, it may mean a significant place for experimentalists, who previously were relegated to hovering around the edges of HPC while computer scientists did the heavy lifting. More on that later.

Recently, HPCwire had an opportunity to talk at length with Rick Stevens about DOE’s effort to develop a strategy and eventually an implementation plan – an AI for Science Plan, not unlike the Exascale Initiative. It was a wide-ranging conversation touching on: infrastructure needs; constituency changes; AI’s effect on the post-Moore’s law era; AI-driven code debugging and systems management; the need for scientific AI benchmarks; and much more. (Thank you, Rick.) Presented here is a portion of that conversation. It makes good midsummer reading. Enjoy.

HPCwire: Walk us through the program, give us a sense of what these AI and science town halls are all about and what they are trying to accomplish?

RS: If you remember back in 2007, we had three town hall meetings – at Argonne, Berkeley and Oak Ridge – that launched the whole DOE Exascale project and so forth. At that time the idea was to get people together and ask them, for exascale, what if we could build these faster machines, what would you do with them? It was a way to get people thinking about the possibility of that, and of course it took a long time to get the exascale computing program going. With these town halls we are kind of asking a variation on that question.

Rick Stevens, Associate Laboratory Director, Argonne National Laboratory

Now we’re asking the question of what’s the opportunity for AI in science or the application of science, particularly in the context of DOE, but more broadly because DOE’s got a lot of collaborations with NIH and other agencies. So really asking the fundamental question of what do we have to do in the AI space to make it relevant for science. The point of the town halls – three in the labs and one in Washington in October – is to get people thinking about what opportunities there are in different scientific domains for breakthrough science that can be accomplished by leveraging AI and working AI into simulation, and bringing AI into big data, bringing AI to the facility and so forth.

So that’s the concept; it’s really to get the community moving. Now DOE and other agencies are all part of this national AI initiative that’s been launched in part by the White House executive order this year on maintaining leadership in AI. In that announcement and subsequent OMB budget priority letters that went out to the agencies, it prioritized progress in AI as the number one priority across the agencies. In addition, it challenged agencies to come up with plans, to figure out resourcing levels, to make progress on managing their data so it is better suited for training AI, and so forth. It laid out a very high-level blueprint as to what the country needs to do to maintain progress in AI and to complement in the academic sector and government what’s going on in the internet companies.

Clearly there’s huge progress in the internet space, but the Facebooks and Googles and Microsofts and Amazons and so on, they are not going to be the primary drivers for AI in areas like high-energy physics or nuclear energy or wind power or for cancer – it’s not their business focus. So the challenge is how to leverage the investments made by the private sector to build on those to add what’s missing for scientific applications – and there’s lots of things missing. Then figure out what the computing community has to do to position the infrastructure and our investments in software and algorithms and math and so on to bring the AI opportunity closer to where we currently are.

The town halls will produce a report. It will be out by the end of the year. That report will inform program planning, budget planning, strategic planning, certainly at the Department of Energy, but the October meeting will also have eight other agencies there, so it will influence their thinking as well.

Let me pause there.

HPCwire: We’ve talked at HPCwire about how AI writ large encompasses many technologies that have been bubbling for years, but suddenly there’s a sense that AI, as it becomes refined, has the potential to deliver a step function in progress. With exascale computing, for example, the expectation is it will allow us to do things at a scale that we were unable to do before. Are we thinking that the infusion of AI, combining it as part of scientific computing, is going to have that same kind of enormous step function impact?

RS: Absolutely. There are two or three factors to that. In and of itself, AI allows us to do things that can dramatically improve rates of discovery in science, or just to be able to process large amounts of data using machine learning methods that you can’t process without them. To some degree, various communities have already been embracing machine learning and other AI techniques over the last few years, but it’s starting to ramp up. We’re starting to see an exponential growth of the adoption of these things, not only in terms of the number of people but also the number of cycles that are being requested on the big machines.

So first answer is yes. The step function, a non-linear acceleration, and we see that happening various ways. One is clearly we are getting lots of data from experimental sources, from simulations, observations, and to gain insight into that data, to make predictions from that data and to have data-driven modeling, AI is one of the few ways that can keep up with the scale of data so that’s one version of it. Another version of it is we’re starting to see huge opportunities in hybridizations between simulation and machine learning methods, whether you’re using machine learning to control simulation or using machine learning functions embedded in simulations to replace certain functions that we otherwise would have computed explicitly with physical models, now replacing those with machine learning models that often are accurate enough for what we need, but also much faster and kind of self-improving over time.

Simulations themselves are going to evolve in new ways and in many cases by bringing the machine learning in really tight integration with simulations, [making] the simulations go faster; you’ll be getting a performance boost on top of the scaling from exascale and we can then avoid putting cycles into things that are less useful. So with a machine learning algorithm steering a computational campaign, we can probably more accurately determine which simulations we actually have to run to achieve some kind of result.

So there’s that kind of background effect, but probably more interesting is there’s whole new ways of problem solving that we’re starting to see emerging that combine generative networks – this is happening with materials and chemistry and biology in particular. We’re using a class of methods called generative models to generate candidate objects. They can be molecules, they can be molecular configurations in materials, or they can be biological sequences or something, based on training these models on some data sets and then using machine learning to predict properties of these things.

Let’s say you are searching for a drug molecule, so you generate thousands or millions of drug candidates, you predict their properties, use active learning to figure out how good your models are, and then you prioritize through active learning say a whole bunch of simulations to improve your understanding. Then you prioritize experiments to collect data where you don’t have enough, say, parameters for your simulations or machine learning. So it’s the idea that there’s going to be machine learning coupled with simulations and coupled at prioritizing experiments, and that we’re going to add more automation of experiments. [One result is] we’re seeing this incredible interest in growth in robotics in laboratories. So robots that can test thousands of samples per day, or can do experiments in biology in an automated fashion, or can screen things. [Those approaches], of course, have been used for a while in pharma, but it’s now starting to break out in more basic biology and materials science and chemistry.
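The generate-predict-prioritize loop Stevens describes can be sketched in a few lines. Everything here is an illustrative stand-in – the generator, property predictor, and uncertainty estimate would in practice be trained models (e.g. a generative network and a surrogate with ensemble variance), not the random toys below:

```python
import random

def generate_candidates(n):
    """Stand-in for a generative model proposing candidate molecules
    (here, random 8-dimensional feature vectors)."""
    return [[random.random() for _ in range(8)] for _ in range(n)]

def predict_property(mol):
    """Stand-in for a learned property predictor (e.g. binding affinity)."""
    return sum(mol) / len(mol)

def model_uncertainty(mol):
    """Stand-in for an uncertainty estimate (e.g. ensemble disagreement)."""
    return max(mol) - min(mol)

def prioritize(candidates, budget):
    """Active learning step: spend the limited simulation/experiment budget
    on candidates that are both promising and poorly understood."""
    scored = sorted(
        candidates,
        key=lambda m: predict_property(m) + model_uncertainty(m),
        reverse=True,
    )
    return scored[:budget]

random.seed(0)
pool = generate_candidates(1000)          # millions, in the scenario described
to_simulate = prioritize(pool, budget=10)  # these go to simulation/experiment
```

In a real campaign the loop closes: the simulation or experimental results on `to_simulate` are fed back as new training data, the predictor is retrained, and the next round of candidates is generated.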

We are seeing a convergence of all this stuff at the same time, enormous progress in simulation capability, progress in AI, coupled with that progress in robotics and new thinking about how to tie all that together. That’s one of the emerging things from these town halls in spaces like chemistry, materials and biology. In high-energy physics, at CERN for example, those huge detectors, they need to filter data. The vast majority of data gets filtered just to the events that you are interested in, and all the code now that does that filtering was hand crafted years ago. As we improve the accelerators and as we get new detectors and so on, the community is thinking if there’s a better way to do that software. Can much of the trigger software and the analysis software be replaced by machine learning methods? And even can the simulations of detectors be replaced by machine learning methods?

We’re seeing it across the board, so what these town halls are doing is giving us a chance to level set across all the disciplines. We had about 350 people at the meeting here at Argonne. The first day was application breakouts, by science domain, and the second day we kind of transposed everybody and it was all cross-cutting topics – ranging from data life cycles to the mathematics of uncertainty quantification to integration of simulation and AI to facilities issues, integrating with experiments and so on.

HPCwire: From an infrastructure perspective – the computational infrastructure required to run the AI methodologies and to run them in combination with traditional simulation and modeling – what are the key changes needed?

RS: Right now we’re at this interesting place because the exascale machines that were designed are in general similar to the Summit and Sierra machines. So at least in the U.S., these machines are built around fat nodes with GPUs, large memory, a reasonable network, connected to a large amount of non-volatile memory. That’s more or less the same platform that people are using for training deep neural networks. So we’re at this particular moment where the platforms we’ve built for simulation also happen to be very similar to the platforms we are standing up for large neural network projects. Of course we need different precision: 64-bit precision for simulations, and 32- or 16-bit or even lower precision for the AI things. But for the most part in the next couple of years, these things are going to be done on the same hardware platform, because a) we have it and b) they’re already pretty good – GPU-based systems are really good at training these models and pretty good for inference, and they’re tightly coupled to large amounts of memory, so we are already in a reasonable place.
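The precision split is easy to see numerically. This small NumPy sketch runs the same reduction in 64-bit (what simulations typically rely on) and in 16-bit (often adequate for deep learning); the values and array size are arbitrary, chosen only to make the drift visible:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(10_000)

# Full-precision accumulation: the kind of result a long-running
# physical simulation depends on.
s64 = x.sum(dtype=np.float64)

# Reduced-precision accumulation: data and accumulator both float16,
# as a low-precision AI accelerator might do.
s16 = x.astype(np.float16).sum(dtype=np.float16)

# float16 carries roughly 3 decimal digits of precision versus ~15 for
# float64, so the 16-bit sum drifts from the 64-bit one. Neural network
# training tolerates this; many PDE solvers do not.
drift = abs(float(s16) - s64)
```

This is the crux of the divergence Stevens describes: hardware optimized to make the 16-bit path fast (dense, low-precision arithmetic) is not automatically good at the 64-bit, often sparse, bandwidth-hungry path.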

If we look forward to say 100 exaflops or zettaflop kind of things, it may not be the case that the best architecture for AI problems is also a reasonable architecture for simulation and vice versa. It may be that we have this kind of divergence; it’s actually quite likely we’ll have a divergence because we know AI, at least current AI methods, current deep learning methods, can effectively use limited precision. They need different kinds of sparsity than numerical simulations, and their demands in terms of storage and I/O and memory bandwidth are different. They can be quite intensive but they have a different kind of pattern than the traditional simulations, so it’s quite possible when we look out 5-8 years from now that we’ll be faced with some choices. We have architectures that can get optimized for simulation versus things optimized for AI – but how tightly coupled do these functionalities need to be?

Kathy Yelick of Lawrence Berkeley Laboratory is one of three leaders of the AI for Science Town Halls program.

Part of what we’re looking at from the roadmap standpoint, hardware architecture roadmap, software architecture roadmap, is what is needed for the combined kind of simulation AI activities within the context of the DOE infrastructure and is it going to be different systems optimized for different things that are somehow coupled. Are we going to still build things that are really tightly coupled? Is there evolution of architecture? There’s a ton of private capital right now going into AI accelerator developments. Some 40 companies/startups in the U.S. going after this right now – including big guys as well. Not all of those companies are going to survive, some will, and there will be some different ideas than what we have now, things that go beyond GPUs.

One of the things that is coming out of the town halls is the need to have scientific AI benchmarks – benchmarks that reflect the kind of neural network models, deep learning models, that can be for science targets like cosmology or biology and are different from what we need for Facebook or Google. Then we have to understand whether or not the hardware architectures that are being developed by these startups – which are being really optimized for computer vision and natural language – whether those are still able to do double duty for the kinds of algorithms that are used in science, or whether we have to do something different. So there is a lot of discussion there. I think it’s too early to know where that is going to go. I think the main message is that the exascale machines that are going to be stood up at Argonne and Oak Ridge and Livermore and other places in the next few years, at least for the most part, are going to be pretty good platforms for doing both simulation and AI in the short term.

HPCwire: A researcher from one of the labs expressed concern that computers will not get faster.

RS: People love to worry about things. The exascale machines will be pretty good architectures, but they’re incredibly expensive machines. I think the challenge isn’t so much that we won’t know how to build faster machines; we can already see how to [get] faster. What’s less clear is whether or not the country can afford to continue to make the investments at the scale needed to actually build faster machines. With the slowing of Moore’s law, the only way you can get faster machines in the future is with architectural improvements and buying more transistors, but transistors aren’t getting cheaper. So we can imagine a billion-dollar machine, that’s only twice as expensive as our exaflop machines – and maybe we get some improvements – we get to a factor of 5x or 10x or something faster, but the price isn’t going to go down, it’s going to increase – and so how many billion-dollar computers can the country afford?

HPCwire: A bigger pressure than that is what happens if the market and technology moves to optimize for these things like machine vision that won’t satisfy the traditional modeling and simulation requirements and how expensive will it be to build a computer if you’re not leveraging the commodity economies of scale?

RS: They were really worried about this – if you go back – and you guys even probably wrote about it back 10 years ago, maybe a little further than that, everyone was worried that we were going to have to make our supercomputers out of set top boxes. Remember cable television. Everybody was installing set top boxes, [and] because there was a huge market for microprocessors, the vendors were sort of fixated on it. Then there were gaming chips, the Sony-IBM project and so on, and everyone was kind of saying we’re going to have to live off of whatever architectures the computer graphics world or the gaming world [spurs] because supercomputing wasn’t going to be big enough to drive new fundamental architectures for whatever we needed. Of course that has more or less happened. Machines are all being built out of GPUs, or maybe in Japan, Arm, but we’re not – we meaning the HPC community – we’re not doing a bottoms-up, ground-up, purpose-built architecture from the transistors up – we’re just not doing that; we’re leveraging a lot of commodity stuff.

The real thing people are nervous about is that if machine learning and the current algorithms kind of continue – and this is a big assumption that there won’t be radical changes in algorithms and let’s assume for a second there isn’t – and people get really good at training at low-precision, you may have this fundamental problem: there will be a huge amount of effort in optimizing low-precision and dense arithmetic essentially, and for higher precision sparse stuff, the machines will be sub-optimal. One could argue we’re already kind of in that phase now, and it could get worse.

On the other hand, the danger for the people and the datacenters that have put literally billions of dollars into the AI architectures is that all it takes is one or two good algorithms that could be invented tomorrow that fundamentally change the kind of hardware that’s needed to make AI go fast, and it’s just way too early [to say much about that]. In scientific computing, in traditional PDE solving stuff, we’ve had 30-40 years of trajectory and we have nearly asymptotically optimal algorithms for many of these systems, and we know, at least so far, we haven’t found any short-cuts; we need memory bandwidth for example and so on. But in AI it’s still really, really early. You could end up in this weird thing – I’m not saying this will happen – but if somebody does figure out how to use spiking neural network chips effectively… Right now we don’t know how to do that even though we have them, we can’t get them to be competitive with GPUs for real heavy lifting, but if somebody breaks that code, then if we get to the very data efficient learning, then GPUs won’t be so useful the way we are thinking of them now. I think it could go either way but I’m not losing any sleep over that part of it. I’m losing more sleep on just the scale of the federal deficit and whether or not the science budgets can support the scale of investments we need for keeping the infrastructure state of the art and competing internationally in terms of the amount of cash that we have to put into this thing.

HPCwire: Comparing developing plans for a national AI program with the Exascale Initiative, what are some of the likely differences between the two?

Jeff Nichols of Oak Ridge National Laboratory is the third leader of the AI for Science Town Halls

RS: One of the cool opportunities that is coming up is the fact that historically the DOE facilities, and in some sense the exascale computing program, have been built around what we could probably characterize as traditional modeling and simulation. Yes, it’s pushing the scale; we want to do things bigger and faster and so on, but we are still doing things like modeling sub-surface or materials or fluid dynamics, or climate or wind turbines or aircraft. Standard fare in some sense. We are pushing the scale and it’s becoming harder, but the community that’s been working on that is essentially the theory, modeling and simulation community. It does touch experiments but usually only in this kind of tangential validation sense.

Once you bring AI into the center, you are no longer dealing with just the part of the community that’s doing modeling and simulation; you are now dealing directly with experimentalists. People that use the light sources, people that use telescopes, people that use accelerators and all kinds of stuff, and who are not modeling people or not theory people; these are what you normally think of as experimentalists. These are people who go to work every day and they generate data, and yeah, they use computing a little bit but they are not big users of the DOE computing facilities. If those folks now start to see real value in analyzing their data with AI or using their data to train AI to do some predictive models that they use in their experiments, [it’s] almost like a fourth way of doing science. You think of theory and experiment, and modeling and simulation as the third way of doing science, but we are kind of inventing a fourth way that’s this kind of data-driven modeling.

The difference that we are seeing is that this could expand by a considerable amount the number of people and the types of applications that the DOE computing facilities would then be working with. And so at this town hall, I asked this question on Monday morning and said how many of you consider yourselves experimentalists, and about a third of the room raised their hands. That never would have been the population we were talking to, say, ten years ago. The experimentalists didn’t talk to the HPC people. But now they are first class citizens; they own the data, they generate the data, they have immediate needs for using AI to analyze their data and make predictions. So we potentially will have another whole community segment that becomes part of HPC, that suddenly becomes users and becomes stakeholders in architecture and software and everything because they are going to need it for the future of experimental science. That’s very cool; it’s a complete change in the composition of the community, right? And of course I think there may be some people on the simulation side who are nervous about that because they are the VIPs, right – people doing big simulations have been the whole reason that we’ve built the center – and now they will have to share with their experimental colleagues.

HPCwire: Along those lines – do you see AI techniques and methodologies becoming built in to the software and some of the instruments and some extent less visible to these experimentalists?

RS: Yes, absolutely – it’s going to permeate everything. Their detectors and imagers are going to start to have AI functions, machine learning functions built in – that’s already starting to happen. Sensors will have to integrate data from IoT type stuff in order to do inference at the edge, if you’re training on big machines and you integrate data flows that way. We’re also going to see it in software development itself – this was one of the topics discussed in the town halls – how are machine learning / AI methods going to change high performance computing software development? Will we have compilers and runtimes that get smarter, that can learn from all the code that they see, all the platforms that they generate code for [and] get feedback [from]? Are we going to have runtimes that auto-tune not just in the kind of coarse way that we do now but in a really nuanced way, by having machine learning functions that can learn not just from your code but from your friend’s code and your neighbor’s code and the code in the machines next door and so forth? AI that can help us write software and help us debug software.

People talked about operating systems, even just systems architectures that become more self-aware. With our current big machines there can be huge power swings based on efficiency and the particular things that are running at any given moment, and we don’t really model that or take it much into account. We can do a little bit of that, but you can imagine a future where systems are much more aware of how much power they are consuming, how much I/O they have, whether they’re getting errors on communication channels. They become much more reactive in some sense to the load. So I think you are going to see AI not just affecting the simple kind of thing – building models from a data stream, or plugging something into a simulation to control the simulation – but the whole environment’s going to get permeated by the progress of machine learning, even to the point where we have projects where we want to use it to help evolve computer architectures, where we can train models on all the codes that are currently running, and really get a much better understanding of the distribution of instructions and operand types and reference patterns and so on, and then ask these models that have been learning those patterns to generate some architectural candidates that are optimized with respect to that data that they’ve been trained on, and we might end up with things that are quite different.

It’s super exciting; it’s kind of like injecting energy at all levels of the ecosystem.

HPCwire: It sounds like all together AI is a driving force for computational progress – I’m wondering about that in the context of Beyond Moore’s. How do you view these AI efforts from a post-Moore’s lens?

RS: It’s somewhat orthogonal – post-Moore people mostly think about it as the underlying materials and circuits scaling problem, and it becomes an opportunity in architecture. If we can’t build faster and smaller transistors anymore – maybe we can make them smaller for a little while but they won’t be much faster and we’ll have to go 3D and start pushing new materials in at some level – but the real opportunity in post Moore, if you factor out things like quantum for a second and think in normal digital computing, is architecture. Architectures got boring for a long time because it was hard to compete with just the rate of improvement from Moore’s law. But now that’s slowing down, and all of a sudden architecture is king again.

We are already seeing that, right; just the fact that there are all these startups trying to do AI architectures is kind of mind-blowing, because that wouldn’t have happened 10-15 years ago because any startup would have gotten blown away just by the progress of Moore’s law. But now that it’s stalled, architecture actually matters. Whether or not AI is an integral part of that, or whether it will just go on to leverage the opportunity of new architectures, we don’t really know. You can use AI of course to help design architectures. If you go extreme post Moore to where you are talking about non-silicon materials, different radical computing models including at one end of the spectrum say quantum and the other end of the spectrum things like neuromorphic, then most of those are going to be hard to program – and you can imagine that AI-based tools can help us program them. So do you want to have half a million quantum computing programmers? Maybe you need to have some AI powered tools that help people think in terms of what would work in those architectures? In that sense AI could be an empowerer of use cases.

But backing up from the hyperbole for a second, I tend to think of the Moore issues as completely orthogonal. Silicon has a long way to go before it isn’t the default thing, for all kinds of reasons. There will be a lot of architecture innovation, and AI is going to drive part of that and benefit from part of that. To the degree that we are going to be in a design rich world, then anything that helps you design something is going to be useful. AI methods can certainly help us to design things whether it’s in CMOS or some other technology.

So if I’m making predictions I’d say CAD tools, Cadence, those things are going to start to get smarter and smarter. They are going to take lessons from things like generative models and we already see that – this automated synthesis stuff, [where] you can sketch out what you want and the system will synthesize most of what you need. I think AI will affect a lot of that. It could also potentially affect more fundamental work in post Moore – that is trying to find combinations of materials that have headroom for circuits, things like that. Of course ultimately we want to build computers that have the power efficiency of brains – so we’re orders of magnitude away from that. Whether or not AI can help us accelerate that, it seems possible but it’s not yet clear how to do that.

I’m super optimistic. What I view of the post Moore stuff is actually a way for the materials scientists to participate in this broader ecosystem of contributing to computing. That’s really what’s happening in the DOE space. If you go back 20-30 years ago, the kind of materials science that was done in the labs really was staying away from silicon. They focused on superconductors. Oxalates. Weird cool things. But really not overlapping with silicon microelectronics because the companies that make a living there were investing so much money it was hard to academically compete with that. What’s happened in the last couple years is the realization that we have to go back to basics if we’re going to find something that will ultimately succeed CMOS. That’s a huge opportunity for the material science community to participate again. We’re starting to see DOE take a serious look. They have these basic research needs workshops earlier this year and programs in microelectronics and they haven’t really been in that space – but that’s because there’s opportunity now for fundamental science to make a contribution. I see all this stuff kind of converging, but I do think it’s kind of separate tracks.

I think the ability to invent a new kind of material substrate and get it into some architecture that will then be useful to accelerate AIs – that timeframe is probably 10 to 20-year window. So for the next ten years these things are all going in parallel.

HPCwire: You spoke for an hour at the first Town Hall event; did we cover some of the same themes you touched on at the meeting?

RS: I gave you an outline – we went through all these things – chemistry, math, materials, climate, biology, high energy physics, nuclear physics, energy – we went through all these kinds of things – of course lots of people, lots of ideas. Then we turned everybody 90 degrees and looked at fundamental math issues, fundamental software issues, data issues, understandability issues, uncertainty quantification, infrastructure, computer architectures. So that’s what we covered – lots and lots of the same things we just talked about – giving you my view and a summary of it.

There’s gonna be a ton of things coming out of each of these town halls. We are kind of rolling them one into the next, so the next one will be influenced to some degree by what we’ve learned in the previous ones and so on. We’ll have a direct report that will be discussed at the Washington meeting, so maybe you guys can come there and get deeper into it. There will be a lot of the political people there – so it’s an opportunity to help get them onboard with what the community is thinking and how we see the possible ways of getting this organized. It’s exciting and we’re trying to get it going. We still have a few years in the exascale program [and] we want this to be sort of starting up as the exascale project rolls over, so we have some kind of continuity going forward.

HPCwire: You said about 350 people – what was the representation?

[The first AI for Science Town Hall took place at Argonne National Laboratory.]

RS: They were mostly from the Midwest. These are kind of regional things because lots of people don’t have travel money, but we did have people from lots of other labs – Oak Ridge and Berkeley and PNNL and Livermore and Los Alamos and SLAC and so on – so a lot of DOE lab people but also a lot of university people – people there from Chicago and Northwestern, from Urbana and the University of Michigan, and from universities out East. So people from all around the Midwest, but about 150 people coming from other parts of the country. There will be a core group at all four of the town halls, probably about 40 people or so, that are the organizing crew and the writing team that Kathy and Jeff and I have. The people organizing this are myself, Jeff Nichols from Oak Ridge, and Kathy Yelick from Berkeley.

HPCwire: Great question list on the overarching agenda. I wanted to pose one to you: “What are the 3-5 open questions that need to be addressed to maximally contribute to AI impact in the science domains and AI impact in the enabling technologies?”

RS: There are more than three. The real question is what I decide to prioritize. But [I’ll address it] at a super high level. One of the main things in applying AI to science is this notion of uncertainty quantification, or what we sometimes call just model confidence. That’s super important because if you’re classifying cat videos, nobody really cares what your confidence interval is or where your error bars are exactly. But if you are using it in some scientific or medical domain, you want to know whether that answer is likely to be correct – 95 percent likely or 40 percent likely – and we have ways of doing that. One way is to build Bayesian models that internally track their own degree of confidence. There are other ways to do it as well, so that’s a hugely important thing. I’d put that up there near the top.
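To make that idea concrete, one widely used stand-in for a fully Bayesian network is ensemble spread: train several models and report the disagreement among their predictions as a rough error bar. The sketch below is purely illustrative (the models are trivial lambdas, not trained networks, and all names are hypothetical):

```python
import statistics

def ensemble_predict(models, x):
    """Predict with an ensemble and report spread-based confidence.

    Each model votes; the mean of the votes is the prediction, and the
    standard deviation of the votes is a crude stand-in for the error
    bar a scientist would want alongside the answer.
    """
    votes = [m(x) for m in models]
    mean = statistics.mean(votes)
    spread = statistics.pstdev(votes)
    return mean, spread

# Three hypothetical surrogate models that mostly agree near x = 2.0
models = [lambda x: 2 * x, lambda x: 2 * x + 0.1, lambda x: 2 * x - 0.1]
pred, err = ensemble_predict(models, 2.0)
```

A small `err` signals the 95-percent-confident case Stevens describes; a large `err` flags a prediction the scientist should distrust.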

The second thing we need to know is this: the community is moving forward on architectures to accelerate AI, but those are predominantly focused on two classes of problems. The first is computer vision, and when I say computer vision I typically mean classifying or generating pictures, two-dimensional color images. The second problem the community is working on is natural language processing – language translation, language understanding – but in normal speech, like Wikipedia speech. [A] favorite example I like to use is the one Google is working on, which is automating making restaurant reservations. Those natural language problems are very common speech. They are not scientific. They don’t involve mathematics, complex terminology, scientific terms, technical vocabulary, or acronyms. And the vision ones are what you and I would normally think of as simple vision problems – not four-dimensional imaging, not hyperspectral imaging or Fourier imaging, or any of the things we can do in science.

So the second big question is: are the architectures being developed to accelerate general AI research in fact even what we need for the types of data and the types of networks and systems we need to build for applying AI in science? We won’t know the answer to that question until we actually build a good library of scientific AI benchmarks and then measure how well those hardware architectures do. Of course we can do some of that theoretically, but that is the number two big question.
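A benchmark library of that kind ultimately reduces to running named workloads against hardware and recording timings. As a toy illustration only (the kernels below are trivial placeholders, not real scientific AI workloads, and the harness is a hypothetical sketch):

```python
import time

def benchmark(fn, data, repeats=5):
    """Time a candidate kernel and return the best-of-N wall clock."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(data)
        best = min(best, time.perf_counter() - start)
    return best

# Two placeholder kernels standing in for real scientific AI workloads
suite = {
    "sum_reduce": lambda xs: sum(xs),
    "square_sum": lambda xs: sum(x * x for x in xs),
}
data = list(range(10_000))
results = {name: benchmark(fn, data) for name, fn in suite.items()}
```

Running the same suite across candidate architectures, rather than relying on cat-video-era benchmarks, is the measurement step Stevens is pointing at.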

The number three big question – and I’ll stop at three here – is that we do, in DOE in particular, massive amounts of simulation. In fact, our first way of thinking about the world is, in some sense, to ask whether we have a mechanistic model of it, a physical model to simulate. Most of the progress in AI involves non-physical modeling. If you think about natural language processing, there’s no physical model for that. If you think about computer vision, for most of the things people do with it there’s no physical model, no ground truth you can generate from first principles. But in many scientific areas we’ve had 400 years of progress – in physics and chemistry and biology and so forth – and we have a lot of physical understanding. How do we use that physical understanding, combined with data, to build AI models that actually internalize it – in other words, models able to make predictions in the world as opposed to in some abstract space?
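One simple way to picture that combination is a first-principles model with a data-driven correction layered on top. In the hypothetical sketch below, free fall stands in for the "known physics" and a single fitted constant stands in for what would, in practice, be a learned network:

```python
def physics_model(t, g=9.81):
    """First-principles prediction: free-fall distance, ignoring drag."""
    return 0.5 * g * t * t

def fit_residual(times, observed):
    """Learn a constant multiplicative correction from observations."""
    ratios = [obs / physics_model(t) for t, obs in zip(times, observed)]
    return sum(ratios) / len(ratios)

def hybrid_model(t, correction):
    """Physics prediction adjusted by the data-driven correction."""
    return correction * physics_model(t)

# Synthetic observations where drag shortens distances by about 5 percent
times = [1.0, 2.0, 3.0]
observed = [0.95 * physics_model(t) for t in times]
k = fit_residual(times, observed)
```

The hybrid keeps the physical structure (predictions scale with t squared) while the data supplies what the physics left out – the internalized-physics behavior Stevens is asking for, in miniature.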

This in some sense gets at the notion I was talking about earlier: hybridizing the symbolic kind of AI, where we can reason about physics and math and so on, with the data-driven AI. Those are the three big problems.

Brief bio:
Rick Stevens has been at Argonne since 1982, and has served as director of the Mathematics and Computer Science Division and also as Acting Associate Laboratory Director for Physical, Biological and Computing Sciences. He is currently leader of Argonne’s Exascale Computing Initiative, and a Professor of Computer Science at the University of Chicago Physical Sciences Collegiate Division. From 2000-2004, Stevens served as Director of the National Science Foundation’s TeraGrid Project and from 1997-2001 as Chief Architect for the National Computational Science Alliance.
