Seven Challenges of High Performance Computing

By Nicole Hemsoth

July 21, 2006

During our coverage of the High Performance Computing and Communication conference in March, HPCwire conducted an interview with Douglass Post, chief scientist of the DoD High Performance Computing Modernization Program, in which he talked about the major challenges currently facing high performance computing. As the HPC community awaits DARPA's selection of the winners of the High Productivity Computing Systems (HPCS) Phase 2 competition, it may be useful to review these challenges in order to understand some of the context of the impending decision. Below is an excerpt of this interview.

—–

Last year, Michael van de Vanter, Mary Zosel and I gave a paper at the International Conference on Software Engineering entitled “HPC needs a tool strategy.” It's available for downloading at www.hpcmo.hpc.mil/Htdocs/HPCMO_Staff/doug_post/papers/HPCNeedsAToolStrategy.pdf. In that paper, we point out that development tools are lagging far behind what's needed. The gains in computer performance are being achieved by increasing the complexity of computer architectures. This increases the challenges associated with programming codes for these machines.

In fact, we find that most code developers consider development tools for massively parallel codes as a major risk item. There are good tools, but not enough, and the tools community has too much turnover. One major issue is that there isn't a good business model for parallel tools because the market is so small and unstable. If a tool doesn't attract enough customers, the company fails and the tool vanishes. If a tool attracts a lot of customers, the company prospers (moderately) and gets bought out. Then the tool gets focused on the priorities of the purchaser, and support for the rest of the community fades out.

Examples include the purchase of Pallas and of Kuck and Associates by Intel. Pallas developed and supported VAMPIR, a good tool for MPI, on most major massively parallel platforms. After Intel bought it, support for VAMPIR on non-Intel processors waned and has by and large disappeared. The same thing happened with the very good Kuck and Associates C and C++ compilers for massively parallel computers. Only companies that make enough to stay in business, but not enough to be really prosperous, like Etnus, which makes the parallel debugger TotalView, and CEI, which makes the visualization tool EnSight, are surviving. An exception may be the Portland Group, which seems to have carved out a niche in the Linux cluster environment.

Universities develop a lot of tools, but graduate students and post-docs have priorities other than software support for their university careers (like graduating and finding a real job). Vendors often develop tools for their machines, but those tools usually don't work on other platforms. Most major massively parallel codes have to run on many different platforms. The developers then need to learn to use many different sets of tools.

What is needed is a stable set of tools that work on all the relevant platforms, give the code developers the tools they need to debug their code and optimize its performance, and give the users the tools they need to set up the problem, run the code and analyze the answers. Many different solutions have been discussed, but I think that the only solution that has a realistic chance of working is for the federal computing community to fund the development of a set of tools. If the tools are developed and supported by industry, the federal government would have to subsidize the company to provide this service. It would also probably have to “own” the source code to ensure that the tool would survive the company being bought by another company, and there are other complications as well.

Another concept is a “tools consortium” with participation from the vendors. There was a tools consortium several years ago, but it died due to lack of resources. At some point, no one will buy computers they cannot use because the development tools are inadequate. Thus, the vendors have a vested interest in tools. We tried to get some interest in a joint development effort by the DARPA HPCS Phase II vendors, but without much success. The bigger vendors see tools as a source of competitive advantage. As I mentioned, the Portland Group seems to be a good provider of tools for the Linux cluster vendors who don't do their own development. This could grow if the major vendors (IBM, Cray, HP, etc.) started using the Portland tools, but I haven't seen that happening yet.

Computational tools offer society a new problem solving paradigm. They have the potential to provide, for the first time, accurate predictions of complex phenomena for realistic conditions. Before computational tools became available, predictions were generally possible only for simple model problems. Computational tools can include the effects of realistic geometries, all of the materials in the problem and all of the known physical/chemical/biological effects, and address a complete system rather than just a small part of the system. Scientific and engineering computational tools offer the potential to rapidly produce optimized designs for systems, explore the limits of those designs, accelerate scientific discoveries, predict the behavior of natural systems like the weather, analyze and plan complex operations involving thousands to millions of individual entities, and analyze and organize enormous amounts of data. However, realizing this potential has many challenges.

I see at least seven major challenges in computational science and engineering, which I list below in rough order of difficulty and importance:

1. Establishing the culture shift in the scientific and engineering community to add computational tools to the suite of existing engineering design and scientific discovery tools.

Although the use of computational science and engineering is steadily increasing, it's beginning to appear that it will take a generation or more for a paradigm shift from the predominant use of traditional scientific and engineering methods to the balanced use of computational and traditional methods to occur. It's an advance that is being made one tool at a time, one field at a time and one application at a time. This is partly due to conservatism and skepticism on the part of scientists and engineers who are understandably reluctant to rely on new, unproven methods when they have traditional methods that work. Even though computational methods offer the potential to enable discoveries and optimize designs much more quickly, flexibly and accurately, every engineering and scientific discipline is different, and most tools for one community have little or no applicability for other communities.

Also, computational tools are often not easy to use and require considerable judgment and expertise. Generally new tools are not “black boxes” that new users can rely on to give them accurate answers. In almost every case, it takes considerable time and experience for users to develop a level of facility with computational tools comparable to what they have with their present methods. Many, if not most, computational tools are not mature in the sense that they have the same level of reliability as traditional methods. Maturity will come only after the remaining six issues are dealt with, and there is a lot of experience in each individual community. Historically, this is not surprising. In the absence of catastrophic failures of an existing methodology, almost all new problem solving methodologies and technologies, and indeed all new intellectual paradigms and technological advances, have taken a generation or two to become accepted.

2. Getting sponsors to support the development of scientific and engineering computational tools that take large groups a decade or more to develop.

The development of effective computational tools takes many years (sometimes as long as 10 to 15 years) of work by significant-sized teams (10 to 50 professionals), as well as success with issues No. 3 through No. 7. This represents a large, upfront investment ($3 million to $15 million per year for 10 to 15 years, or roughly $30 million to $300 million in total) before there are large payoffs. That's one reason why it's important for code development projects to emphasize incremental delivery of capability. It's a challenge to convince potential sponsors that they should make investments of this order for an unproven methodology. Although one can make “return on investment” arguments, the numbers are only estimates until one has experience with the computational tool, and that only occurs after the investment has been made. It's the traditional “chicken and egg” problem that has bedeviled most new cultural shifts and paradigms.

Today, if one proposes, as we are doing, to spend $100 million to build a new computational tool to design military aircraft, or plan military operations, there is considerable skepticism that the tool will be worth $100 million even if the tool would save billions of dollars by reducing the technical problems normally found late in the procurement process that lead to schedule delays and expensive design modifications. I think that if we made the same kind of proposal in 2036, people would respond with, “Why would you do it any other way? Tell us something that we don't know.” But it's 2006, not 2036, and the paradigm shift hasn't happened yet. The problem with this issue is that, unless someone supports the development of a computational tool for five to 10 years or more before it becomes available for large scale use, it will never exist, no matter how large the potential value of the tool.

3. Developing accurate computer application codes.

The development of large-scale computational scientific and engineering codes is a challenging endeavor. Case studies indicate that success requires a tightly-knit, well-led, multi-disciplinary and highly competent team to work together for five to 10 years to develop the code. The tool has to provide reasonably complete treatments of all the important effects, be able to run efficiently on all the necessary computer platforms, and produce accurate solutions. In many cases, effects that have time and distance scales that differ by many orders of magnitude have to be integrated, and general algorithms for accomplishing this have not been developed. The design of a computational tool depends crucially on the details of the domain science, and few general rules exist. A code for aircraft design is very different than a code for analysis of chemical reactions. Each code development project is a highly challenging task. The record indicates that as many as 50 percent of these types of code projects fail to achieve their initial milestones and that, in some areas, as many as 33 percent fail to ever produce anything useful. Software engineering for computational science and engineering is a brand-new field and still very immature. As has been the case with other problem solving methodologies, it will take several generations of code projects for the field to mature.
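To make the scale-separation problem concrete, consider a minimal sketch (in Python with SciPy, purely illustrative and not drawn from any code discussed here) of a single stiff equation whose fast and slow time scales differ by roughly three orders of magnitude. An explicit integrator must resolve the fastest scale everywhere, while an implicit one can step over it once the fast transient has decayed:

```python
# Illustrative stiff ODE: a fast relaxation (rate ~1000) toward a slowly
# varying target cos(t). The exact solution from y(0)=1 is y = cos(t).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    return -1000.0 * (y - np.cos(t)) - np.sin(t)

y0 = [0.0]  # start off the slow manifold, so there is a fast transient
for method in ("RK45", "BDF"):   # explicit vs. implicit integrator
    sol = solve_ivp(rhs, (0.0, 10.0), y0, method=method, rtol=1e-6, atol=1e-9)
    print(f"{method}: {sol.nfev} right-hand-side evaluations, "
          f"final value {sol.y[0, -1]:.6f} (reference {np.cos(10.0):.6f})")
```

The explicit method typically needs orders of magnitude more evaluations; real multi-physics codes face the same tension across many coupled effects, without general-purpose algorithms to fall back on.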

4. Verifying and validating the application codes for the problems of interest.

Verification is ensuring that the code solves the equations accurately (i.e., that the code has few defects and that the mathematics in the code are correct.) Validation is ensuring that the code includes treatments for all the important effects. Results from unverified and unvalidated codes are almost certainly inaccurate and misleading. They are worse than worthless because the user will almost certainly make an incorrect decision if he bases it on the results. The challenge is that both verification and validation methods are incomplete and few in number. Verification usually involves running test problems and comparing the code results with the expected results. The difficulty is that there are generally only a handful of test problems for sub-sections of the code with known results, and generally none for the integrated code. Better verification techniques are urgently needed.
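A minimal sketch of such a verification test, using a hypothetical centered-difference example with a known analytic answer, might look like the following; the observed convergence order should match the expected order of the scheme, and a large deviation signals a defect:

```python
# Verification sketch: confirm that a second-order centered difference
# converges at the expected rate against an analytically known derivative.
import numpy as np

def centered_derivative(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

exact = np.cos(1.0)                      # d/dx sin(x) evaluated at x = 1
steps = [0.1 / 2**k for k in range(5)]
errors = [abs(centered_derivative(np.sin, 1.0, h) - exact) for h in steps]

for h, e_coarse, e_fine in zip(steps, errors, errors[1:]):
    # Observed order of accuracy between successive grid refinements.
    print(f"h = {h:.4e}   observed order = {np.log2(e_coarse / e_fine):.2f}")
```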

Validation usually means comparing the calculations with experimental data for relevant experiments in the range of the problem of interest. Getting accurate data for validation is challenging. Generally, it has been difficult to find experimentalists who are interested in producing validation data. They are more interested in using experiments to make scientific discoveries. In addition, the agencies that fund experimental research are also much more interested in funding experiments to make scientific discoveries than they are in validating codes. The cost of validation should be part of the cost of developing and deploying the computational tool, yet almost no one budgets adequate funds for validation.

Verification and validation are essential if a computational tool is to be useful. Results from an inadequately verified code likely contain mathematical errors, and can't be relied upon in any way. Results from a code that hasn't been validated for the application of interest likely will miss some important effect, and can't be used as the basis for a decision. Verification and validation are another area that needs to become much more mature. They are beginning to receive more attention, but the challenge is large. At a higher level, if results from a computational tool are to be useful, the uncertainties in the answers are needed. Methodologies for determining the uncertainties are just now beginning to be worked out.
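One simple approach sometimes used for uncertainty estimates is Monte Carlo propagation of input uncertainties through the model; the sketch below is purely illustrative, with a stand-in model and made-up input distributions:

```python
# Illustrative Monte Carlo uncertainty propagation: sample uncertain inputs,
# run the model on each sample, and report the spread of the prediction.
import numpy as np

rng = np.random.default_rng(0)

def model(drag_coeff, velocity):
    # Stand-in "simulation": a dynamic-pressure-style quantity of interest.
    return 0.5 * 1.225 * drag_coeff * velocity**2

# Assumed measured inputs with stated uncertainties (illustrative numbers).
drag = rng.normal(loc=0.30, scale=0.02, size=100_000)
vel = rng.normal(loc=250.0, scale=5.0, size=100_000)

q = model(drag, vel)
print(f"prediction = {q.mean():.1f} +/- {q.std():.1f} (one standard deviation)")
```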

5. Continuing to improve computer performance.

It will be crucial to continue to improve the raw performance of computers above the present levels. Fortunately, computer performance is continuing the exponential growth begun in the 1950s. Today, we have computers in the 100 teraflops range and, within a few years, we will have computers in the petaflop range. Memory sizes and storage capacity also are continuing to grow exponentially. Keeping the thermal power within acceptable levels continues to be a major issue, but solutions are being found. The growth in computer power is being accomplished partially by the introduction of massive parallelization. This is usually accompanied by distributing the memory into many discrete segments. As a result, bandwidth and memory latency remain major issues. It takes longer and longer to collect and distribute data from remote memory locations. Massive parallelization has greatly increased the complexity of machine architectures.

The extent of these problems has been masked by the benchmarks used to measure computer performance. The benchmark widely used to rank computers in order of processing power is a linear algebra package, Linpack, that solves a dense system of linear equations (http://www.netlib.org/benchmark/hpl/). It basically tests the speed of the floating point arithmetic units. Performance with this benchmark determines the ranking on the Top500 list of supercomputers. The problem is that the performance of most computational science and engineering codes is not measured very well by a single benchmark like Linpack. Thus, the computer vendors are in danger of optimizing using the wrong criteria.

The Linpack benchmark doesn't degrade with increasing memory latency nearly as fast as most real applications because memory access is structured for the Linpack benchmark, whereas most real problems require some random access to memory. Also, real problems require integer arithmetic, some need a lot of memory and so on. Due to memory latency, the multiplicity of different types of computing required for a real problem, etc., real codes usually fall far short of the Linpack performance. As a result, computers that are optimized to do well with Linpack are not necessarily optimized to run most scientific and engineering codes.
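A crude illustration of the memory-access effect, independent of any particular benchmark suite, is to time the same reduction over contiguous data and over randomly gathered data; the gap is a rough proxy for why latency-bound applications fall far short of dense-kernel rates:

```python
# Illustrative micro-benchmark (not part of any official suite): sum the same
# array via contiguous, prefetch-friendly access and via random gathers.
import time
import numpy as np

n = 10_000_000
data = np.random.rand(n)
perm = np.random.permutation(n)       # random access pattern

t0 = time.perf_counter()
s_stream = data.sum()                 # contiguous streaming access
t1 = time.perf_counter()
s_gather = data[perm].sum()           # latency-bound random gather
t2 = time.perf_counter()

print(f"contiguous: {t1 - t0:.3f} s   random gather: {t2 - t1:.3f} s")
```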

This has led to an effort to develop a set of benchmarks that do a better job of representing the workload of a standard set of scientific and engineering codes (e.g., HPC Challenge; http://icl.cs.utk.edu/hpcc/). However, even the HPC Challenge is not really representative. The DoD High Performance Computing Modernization Program measures the performance of candidate systems by running a set of 10 applications that represent its workload.

6. Programming complex massively parallel computers.

The growth in the complexity of computer architecture due to massive parallelization is making it very challenging to develop programs for these new computers. With programs and data strewn across hundreds of thousands of distinct processors and separated memory banks, organizing the exchange of data and the order of computations requires very complex logic, a lot of specialized programming and the ability to tolerate faults. Most programs rely on MPI, a message passing library that requires fairly low level logic and commands. Specialized debugging and performance optimization tools are needed. Better programming tools, better memory access models, etc., are needed. Languages that express parallelization at higher levels of abstraction are needed, but they will face the challenge of gaining wide acceptance. Developers of large code projects that take tens of person-years to develop will preferentially choose languages that are mature and that are used on many different platforms.
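A minimal halo-exchange sketch (here using the mpi4py bindings, an assumption; the interview names only MPI itself) gives a flavor of the bookkeeping MPI imposes even for a trivial one-dimensional decomposition:

```python
# Each rank owns a slice of a distributed 1-D field and must explicitly
# trade edge cells with its left and right neighbors.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = np.full(10, float(rank))   # this rank's piece of the field
halo = np.zeros(2)                 # halo[0]: from left neighbor, halo[1]: from right
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Send the rightmost cell to the right neighbor; receive the left halo cell.
comm.Sendrecv(sendbuf=local[-1:], dest=right, recvbuf=halo[0:1], source=left)
# Send the leftmost cell to the left neighbor; receive the right halo cell.
comm.Sendrecv(sendbuf=local[0:1], dest=left, recvbuf=halo[1:2], source=right)

print(f"rank {rank} of {size}: received halo values {halo}")
```

Run with, for example, mpiexec -n 4 python halo.py. Even this trivial decomposition requires the application to track neighbor ranks, buffer layout and boundary cases explicitly; real codes multiply that bookkeeping across three dimensions, many fields and hundreds of thousands of ranks.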

The challenge of performance optimization is heightened by the multiplicity of computer vendors, operating systems and architectures, and the turnover in architectures and platforms every three to five years. Most large codes need to run on several different platforms at any given moment, and have to be ported to new platforms every three to five years. This is much shorter than the 20- to 30-year life of many large application codes. In addition, the codes should ideally be optimized for performance on all of the platforms. In reality, the emphasis on performance optimization gives way to the requirement to port the code to multiple platforms, and computer vendors have pushed a large part of the challenge of getting good performance from the hardware onto the applications programmers. Code developers now not only have to develop codes with much greater domain science complexity, but also have to cope with computer architectures of greater complexity.

Part of the performance challenge is that many solution algorithms don't scale well with the number of processors. Many codes will have to be rewritten to employ algorithms that scale better. In cases where scalable algorithms don't yet exist, the challenge of inventing them awaits the code developer.
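Amdahl's law, though not cited in the interview, is the standard way to quantify this limit: if only a fraction p of the work parallelizes, the speedup on N processors is bounded by 1 / ((1 - p) + p/N). A small worked example:

```python
# Amdahl's-law sketch: even a small non-parallelizable fraction caps speedup.
def amdahl_speedup(parallel_fraction: float, processors: int) -> float:
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / processors)

for p in (0.90, 0.99, 0.999):
    print(f"parallel fraction {p}: "
          f"speedup on 1,000 processors = {amdahl_speedup(p, 1_000):7.1f}, "
          f"on 100,000 processors = {amdahl_speedup(p, 100_000):7.1f}")
```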

7. Using the complicated computational science and engineering codes to solve problems.

Finally, the payoff for developing the computational tools and the computer comes when the production user employs the computational tools to solve real problems. Getting solutions to their problems is, after all, why sponsors pay for the computers and the codes. Almost none of the largest scientific and engineering codes can be treated as “black boxes.” A skilled user with deep knowledge of the problem domain is an absolute necessity. Examples abound of users who get incorrect answers with a good code in cases where a skilled user of the same code was able to get a correct answer. Interpretation of the code results is also challenging. A large, massively parallel computation may produce terabytes of data. Extracting information from such datasets is a massive challenge. Setting problems up to run is also challenging. It can often take three to six months to set up a mesh for a complicated problem starting with a geometric description from a CAD/CAM output file.
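As a purely illustrative example of working with output that is too large for memory, a post-processing script can stream an on-disk array in chunks rather than loading it whole; the file name, data type and statistic below are assumptions, not details from the interview:

```python
# Stream a large binary output file in fixed-size chunks and accumulate
# simple statistics without ever holding the whole dataset in memory.
import numpy as np

field = np.memmap("pressure_field.bin", dtype=np.float32, mode="r")  # assumed layout
chunk = 50_000_000                                                    # elements per pass
running_max, running_sum, count = -np.inf, 0.0, 0

for start in range(0, field.size, chunk):
    block = np.asarray(field[start:start + chunk], dtype=np.float64)
    running_max = max(running_max, block.max())
    running_sum += block.sum()
    count += block.size

print(f"mean = {running_sum / count:.6g}, max = {running_max:.6g}")
```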

Thus, while computational science and engineering has great potential, there are significant challenges to realizing that promise.

—–

Douglass E. Post has been developing and applying large-scale multi-physics simulations for almost 35 years. He is the Chief Scientist of the DoD High Performance Computing Modernization Program and a member of the senior technical staff of the Carnegie Mellon University Software Engineering Institute. He also leads the multi-institutional DARPA High Productivity Computing Systems Existing Code Analysis team. Doug received a Ph.D. in Physics from Stanford University in 1975. He led the tokamak modeling group at Princeton University Plasma Physics Laboratory from 1975 to 1993 and served as head of the International Thermonuclear Experimental Reactor (ITER) Joint Central Team Physics Project Unit (1988-1990), and head of the ITER Joint Central Team In-vessel Physics Group (1993-1998). More recently, he was the A-X Associate Division Leader for Simulation at Lawrence Livermore National Laboratory (1998-2000) and the Deputy X Division Leader for Simulation at the Los Alamos National Laboratory (2001-2002), positions that involved leadership of major portions of the US nuclear weapons simulation program. He has published over 230 refereed papers, conference papers and books in computational, experimental and theoretical physics and software engineering, with over 5,000 citations. He is a Fellow of the American Physical Society, the American Nuclear Society, and the Institute of Electrical and Electronics Engineers. He serves as an Associate Editor-in-Chief of the joint AIP/IEEE publication Computing in Science and Engineering.
