FPGAs in the HPC Landscape

By Christopher Lazou

April 6, 2007

For the 3rd year running, the UK National HPC service at the University of Manchester has organised an excellent technical symposium on reconfigurable computing with Field Programmable Gate Arrays (FPGAs). The event, which took place March 27-29, was co-hosted by the University of Manchester and the US National Center for Supercomputing Applications (NCSA) and was sponsored by SGI, Nallatech and the UK Institute of Physics (ITEC).

This symposium was targeted at researchers and vendors actively involved in high performance reconfigurable computing with FPGAs and in high performance computing more generally. It was preceded by a hands-on workshop on how to program FPGAs for HPC applications. The workshop was co-hosted by Mitrionics Inc., developer of the Mitrion Virtual Processor and Mitrion Software Development Kit, and by SGI, manufacturer of the SGI Altix family of servers with FPGA-based SGI RASC RC100 computation blades.

This full-day workshop was titled “20x faster NCBI BLAST — Practical Programming of FPGA Supercomputing Applications” and was supervised by Matthias Fouquet-Lapar, principal engineer from SGI, and Stefan Möhl, CTO and co-founder of Mitrionics. It covered a broad range of introductory-to-advanced topics, using the acceleration of the NCBI BLAST application as an example of a successful real code implementation.

BLAST (Basic Local Alignment Search Tool) is the primary tool for sequence comparisons in bioinformatics and contains several subprograms for different computational problems. These subprograms all use a heuristic search algorithm designed to speed up computations while retaining sensitivity. The amount of sequence data in public databases has been growing faster than CPU speed, making speed a fundamental problem in bioinformatics data mining.
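
To make the heuristic concrete, here is a minimal C sketch (illustrative only, not NCBI's implementation) of the "seed" phase on which BLAST-style searches are built: exact short-word matches between query and subject are located cheaply, and only those hits would then be extended into full alignments.

```c
/* Minimal illustration of the "seed" phase of a BLAST-style heuristic:
 * report exact matches of every length-W word of the query within the
 * subject.  Real BLAST builds a lookup table of query words and extends
 * each hit into a (gapped) alignment; the point here is only that short
 * exact matches are far cheaper to find than exhaustive alignment. */
#include <stdio.h>
#include <string.h>

#define W 3   /* seed length; NCBI BLASTP typically uses 3, BLASTN 11 */

static void find_seeds(const char *query, const char *subject)
{
    size_t qlen = strlen(query), slen = strlen(subject);
    for (size_t qi = 0; qi + W <= qlen; qi++)
        for (size_t si = 0; si + W <= slen; si++)
            if (memcmp(query + qi, subject + si, W) == 0)
                printf("seed %.*s at query %zu / subject %zu\n",
                       W, query + qi, qi, si);
}

int main(void)
{
    find_seeds("MKTAYIAK", "GGMKTAYQRMKT");   /* toy protein fragments */
    return 0;
}
```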

Mitrion-accelerated BLAST applications are designed to run on the Mitrion Virtual Processor operating in FPGA-based computer systems, including the SGI RASC RC100 computation blade in SGI Altix family servers, built with dual Xilinx Virtex-4 FPGAs. The turnkey BLAST application is intended to deliver FPGA acceleration out of the box, without development time or cost and without risk to the user. It was claimed that Mitrion-accelerated BLAST marks a major industry milestone by achieving significant performance increases over traditional processors, and that it is the first commercially available FPGA-accelerated application to run on systems from a major vendor.

Using the BLAST implementation, the workshop illustrated how the fine-grained, massively parallel Mitrion Virtual Processor, the core of the Mitrion Platform, works in practice. Unlike C, which is an imperative language, Mitrion-C is a functional, data-driven language. Exploiting these functional attributes, the Mitrion Virtual Processor has an architecture that adapts to each program it runs in order to maximize performance. This dramatically reduces the total development cost of FPGA-based software acceleration and, more importantly, enables the supercomputing industry to benefit from FPGA application acceleration. A further advantage is that FPGAs draw considerably less electrical power than conventional CPUs.
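
The practical difference between the imperative and data-driven views can be sketched in plain C (illustrative code for the concept, not Mitrion-C syntax): a loop whose iterations are independent exposes fine-grained parallelism that a data-flow compiler can spread across the FPGA fabric, while a loop-carried dependency forces sequential evaluation.

```c
/* Illustrative C, not Mitrion-C: the distinction a data-driven tool
 * chain exploits.  Iterations with no mutual dependencies can each be
 * mapped to their own piece of FPGA fabric and run concurrently;
 * a loop-carried dependency serialises the computation. */
#include <stddef.h>

/* Every iteration depends only on its own inputs, so in principle all
 * n multiplications can proceed in parallel. */
void elementwise_product(const float *a, const float *b, float *c, size_t n)
{
    for (size_t i = 0; i < n; i++)
        c[i] = a[i] * b[i];
}

/* Each iteration needs the previous result; the dependency chain,
 * not the device, limits the throughput. */
float running_filter(const float *a, size_t n)
{
    float acc = 0.0f;
    for (size_t i = 0; i < n; i++)
        acc = 0.5f * acc + a[i];
    return acc;
}
```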

“The Mitrion-C is specifically designed to optimize parallel programming, which is at the core of what makes running applications on FPGAs so powerful,” said Stefan Möhl from Mitrionics during the workshop. He went on to say: “With the Mitrion Virtual Processor running in the FPGA, the enhanced performance becomes accessible to scientists and developers, without any need for hardware design skills. We combine the performance of dedicated hardware with the programmability of parallel processors”.

The accelerated BLAST Mitrion implementation is available for downloading (visit www.sourceforge.net or www.mitrion.com for more information).

Many of the workshop attendees I spoke to were enthused about their achievements in this new area of computing, claiming extraordinary performance.

For example, Dr Charles Gillan from Queen's University Belfast attended last year's workshop and this year reported on his experience using an SGI RASC module and Mitrion-C to compute the two-electron integrals in electron scattering by hydrogen atoms at intermediate impact energies. He started with legacy codes written in Fortran: the atomic R-matrix code circa 1972 and the molecular code circa 1981. These codes were converted to Mitrion-C on the SGI Athena Blade, and he then used the Mitrion Platform to develop FPGA designs and run them on the SGI RASC RC100.

Charles praised the graphical representation in the Mitrion Platform, saying he found it very useful for understanding the design during development. He concluded that Mitrion-C is a powerful tool, although, as with any other environment, some programmer re-education is needed to make the transition to fully parallel thinking. Simulation is a critical step: his advice is not to go anywhere near the hardware until you have simulated your whole problem as a design.

Other speakers at the symposium included experts from AMD, SGI, Mitrionics, Nallatech, ORNL, NCSA, George Washington University and the University of Cape Town in South Africa, as well as FPGA users and researchers from several UK research laboratories and universities, including the Edinburgh Parallel Computing Centre (EPCC).

To recap, FPGAs belong to a class of devices known as PLDs (Programmable Logic Devices), which can be programmed in the field after manufacture. For a special class of applications, for example cryptography, and especially those needing integer or fixed-point arithmetic, the benefits of FPGAs can be very significant: up to two orders of magnitude speed improvement compared with a conventional cluster of CPUs. As demonstrated at the workshop, bioinformatics is a very suitable candidate for FPGA treatment. Apart from the potential performance gains, FPGAs have low electrical power needs, an added benefit and incentive.
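
As a concrete illustration of the fixed-point arithmetic that maps so well onto FPGA fabric, here is a minimal Q16.16 sketch in C (an assumed format chosen for illustration): a single integer multiply and a shift stand in for a full floating-point unit.

```c
/* Minimal Q16.16 fixed-point sketch: 16 integer bits, 16 fraction bits.
 * On an FPGA this is just an integer multiplier plus wiring, which is
 * why integer and fixed-point workloads gain so much. */
#include <stdint.h>
#include <stdio.h>

typedef int32_t q16_16;

static q16_16 q_from_double(double x) { return (q16_16)(x * 65536.0); }
static double  q_to_double(q16_16 x)  { return x / 65536.0; }

static q16_16 q_mul(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a * b) >> 16);   /* widen, multiply, rescale */
}

int main(void)
{
    q16_16 a = q_from_double(3.25), b = q_from_double(0.5);
    printf("3.25 * 0.5 = %f\n", q_to_double(q_mul(a, b)));
    return 0;
}
```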

FPGAs have been around for over twenty years in embedded systems and like their newer accelerator brethren, GPUs, Cell processors and ClearSpeed boards, they play an important role in their application domain, e.g., the games domain for Cell and GPUs. The tantalising question is whether these devices will become dominant in HPC systems. Below are highlights from the symposium talks to give a flavour of what was said.

As most of you know, Nallatech has many years' experience in the embedded market and currently markets the H100 Series FPGA platform. For compiling ANSI C code to FPGAs, the user can choose the Nallatech DIME-C compiler, the Mitrion Software Development Kit, or Impulse-C.

Alan Cantle, president and founder of Nallatech, reviewed the current state of FPGAs in the HPC industry and made some predictions about the future.

In terms of industry profile, the FPGA has garnered significant interest from the HPC community since the product announcements from Cray and SGI in 2004. AMD's Torrenza, a socket specification, can be used to attach FPGAs or other types of hardware accelerators, and Intel has recently opened its front-side bus to Xilinx and Altera. The industry has come to accept that, because of heat and power constraints, heterogeneity and accelerators are necessary if it is to maintain the performance gains enjoyed in the past. In short, we are all beginning to see a “market pull” for FPGA technology after more than a decade of “technology push”.

Those familiar with Geoffrey Moore's technology adoption lifecycle will recognise that FPGAs for HPC are now in that uncomfortable territory of “the chasm”, the transition period between fiercely passionate early adopters and mainstream users who are beginning to take an interest but are not yet convinced. Accelerators in general, not just FPGAs, are all in the chasm together, sharing the common goal of becoming an essential component of tomorrow's HPC production platforms. The battles fought in “the chasm” will decide which FPGA vendors become dominant in driving the technology, and whether FPGAs win out against other accelerator technologies.

Vendors have to focus on ensuring that the early adopters of their technology become highly successful with a fully deployed, referenceable solution. This means that vendors have to move away from focusing on a hot-spot piece of customer code and look at the customer's complete system problem. This requires extremely close and collaborative relationships with a few key customers in a chosen market sector, where the benefits of joint success are significant drivers.

As an example of misplaced technology focus, last year there was hysteria around the bandwidth and latency issues between the FPGA and the host processor, whereas what was needed was a complete view of the customer's system-level problem. Accelerating a hot spot by a factor of 10 to 100 will inevitably move the bottleneck somewhere else in the system, so focusing purely on host-to-FPGA communications is extremely short-sighted.
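
Amdahl's law makes the point quantitative. The short C sketch below (illustrative arithmetic, with an assumed 80 percent accelerated fraction) shows how modest the overall gain remains when only the hot spot is accelerated.

```c
/* Back-of-envelope Amdahl's-law check of the point above: even a 100x
 * kernel speedup buys only a modest overall gain if the rest of the
 * system (serial code, I/O, host-FPGA transfers) is left untouched.
 * The 80 percent accelerated fraction is an assumed figure. */
#include <stdio.h>

static double overall_speedup(double accelerated_fraction, double factor)
{
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor);
}

int main(void)
{
    printf(" 10x on 80%% of the runtime -> %.2fx overall\n", overall_speedup(0.8, 10.0));
    printf("100x on 80%% of the runtime -> %.2fx overall\n", overall_speedup(0.8, 100.0));
    return 0;
}
```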

Cantle then looked at the FPGA and how it fares against other accelerators. He analysed the business and technical aspects that make the FPGA a strong candidate for survival, and noted the current relative quietness of vendors in this industry as they focus heavily on translating their early demonstrators into real commercial realities with their closest and most loyal customers.

He concluded by saying: “The computing industry is starting to transition through its most significant change since the adoption of the PC. It is going to be a very interesting decade, and there will be many winners and losers in the fight to gain a piece of the significant market share that will be on offer”.

Rob Baxter from EPCC and the FPGA High-Performance Computing Alliance (FHPCA) described Maxwell, the recently completed 64-FPGA parallel computer built by the FHPCA at the University of Edinburgh.

Maxwell comprises 64 Xilinx Virtex-4 FPGAs hosted in a 32-way IBM BladeCenter cluster. Each blade is a diskless 2.8 GHz Intel Xeon with 1 GB of main memory and hosts two FPGAs through a PCI-X expansion module. The FPGAs are mounted on two different PCI-X card types, Nallatech HR101 and Alpha Data ADM-XRC-4FX, providing an interesting mixed architecture and an environment in which to experiment with vendor-neutral programming models.
 
To assist in programming Maxwell, the FHPCA have developed the Parallel Toolkit (PTK). Rather than building a library of generic FPGA cores for linear algebra and the like, i.e., a BLAS-for-FPGA approach, FHPCA took the view that achieving optimal performance with FPGAs requires optimising memory bandwidth.

The approach the PTK adopts involves converting as much of the key application kernel as possible to run on the FPGAs, ideally ensuring there is one big data transfer at the start. Once the data are on the FPGA side, the FPGAs can process them without reference to the host CPUs, exchanging data with each other in fully parallel fashion as required. These accelerated kernels are then hidden behind vendor-neutral interfaces to provide high-level portability for the application.
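
The pattern can be sketched as a small, vendor-neutral C interface. The code below is hypothetical, not the actual FHPCA PTK API; the "backend" is a host-memory stand-in so the example is self-contained, where a real port would supply one backend per board type (Nallatech, Alpha Data, and so on).

```c
/* Hypothetical sketch of the Maxwell pattern: one bulk transfer to the
 * accelerator, kernel work done entirely on the FPGA side, one transfer
 * back, all hidden behind a vendor-neutral interface. */
#include <stdlib.h>
#include <string.h>

typedef struct { void *dev; size_t bytes; } fpga_buffer;   /* opaque to callers */

/* --- vendor-neutral interface; a real backend would drive the board --- */
static fpga_buffer *accel_upload(const void *host, size_t bytes)
{
    fpga_buffer *b = malloc(sizeof *b);
    if (!b) return NULL;
    b->dev = malloc(bytes);                 /* stand-in for a DMA transfer */
    b->bytes = bytes;
    if (!b->dev) { free(b); return NULL; }
    memcpy(b->dev, host, bytes);
    return b;
}

static int accel_run_kernel(const char *name, fpga_buffer *b)
{
    (void)name;                             /* stand-in kernel: scale in place */
    float *v = b->dev;
    for (size_t i = 0; i < b->bytes / sizeof(float); i++)
        v[i] *= 2.0f;
    return 0;
}

static int accel_download(fpga_buffer *b, void *host, size_t bytes)
{
    memcpy(host, b->dev, bytes);
    return 0;
}

static void accel_free(fpga_buffer *b) { free(b->dev); free(b); }

/* --- host side: the CPU only stages data and collects the result ------ */
int run_accelerated_kernel(const float *in, float *out, size_t n)
{
    fpga_buffer *buf = accel_upload(in, n * sizeof *in);
    if (!buf) return -1;
    int rc = accel_run_kernel("imaging_kernel", buf);   /* hypothetical name */
    if (rc == 0)
        rc = accel_download(buf, out, n * sizeof *out);
    accel_free(buf);
    return rc;
}
```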

To illustrate this approach, the FHPCA have ported three real application demonstrators to Maxwell: real commercial codes from the medical imaging and oil-and-gas sectors, and a typical simulation code from financial services.

For each demonstration code, significant effort has been spent in getting as much of the application as possible to run on the FPGAs, leaving the CPUs to do little more than launch jobs and wrap them up at the end. Early performance results are very encouraging, with all demonstration codes showing at least a factor-of-six performance improvement per node over 3 GHz Xeon systems.

Tarek El-Ghazawi, from George Washington University, gave a talk titled "Reconfigurable Computers: Readiness for High Performance Computing". His team considered three representative, commercially available high-level tools — Impulse-C, Mitrion-C and DSPLogic — for creating reconfigurable computing designs from high-level languages (HLLs). These tools, selected to represent imperative, functional and schematic programming respectively, were evaluated on the centre's Cray XD1. In spite of the disparity in concepts behind them, the methodology adopted was able to uncover the basic differences among the tools and assess their comparative performance, resource utilization and ease of use.

The results of this investigation are relevant to any type of system seeking to use the FPGA in cooperation with a microprocessor, as found in products from Cray (including the XT4), SGI, SRC and others. Other programming environments, such as Celoxica's Handel-C, were found to be structurally similar to the HLLs described above, with comparable performance.

For the near future, El-Ghazawi sees the need to develop libraries of common functions (e.g., BLAS and LAPACK) as crucial for widespread adoption of hardware acceleration for HPC. Such functions will make the use of FPGAs as coprocessors much more transparent. This approach, already being undertaken in the GPU world, as well as by ClearSpeed, seems promising for HPC.
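
A hedged sketch of what such transparency might look like at the call site: the application uses an ordinary BLAS-style routine and the library decides whether an FPGA path exists. The probe below is a stand-in that always reports no FPGA, purely to keep the example self-contained; none of these names come from an actual library.

```c
/* Hypothetical "transparent coprocessor" library routine: callers use a
 * familiar BLAS-style interface and the library chooses the execution
 * path.  The FPGA probe is a stand-in, so the host path below runs. */
#include <stddef.h>

static int fpga_available(void) { return 0; }   /* stand-in probe */

/* y := alpha*x + y  (the BLAS saxpy operation) */
void accel_saxpy(size_t n, float alpha, const float *x, float *y)
{
    if (fpga_available()) {
        /* a real library would hand x and y to a pre-loaded FPGA
         * bitstream here and return once the result is copied back */
    }
    for (size_t i = 0; i < n; i++)              /* portable host path */
        y[i] += alpha * x[i];
}
```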

Olaf Storaasli, who leads the FPGA research in the Future Technology Group at ORNL, gave an update on the team's evaluation work. As early adopters of new technologies, ORNL wants to know what role FPGAs and other accelerators will play within the overall scheme for providing user services on their systems and for their forthcoming petaflops facility. His talk was titled "Accelerating Scientific Applications with FPGAs". Using ORNL's Cray XD1, as well as FPGA-based systems from SRC, Xilinx and SGI, he described some of the methods and applications being explored to maximize performance, while minimizing the changes required to port application code, in order to exploit the benefits of FPGAs. He also offered an opinion on the relative merits of software tools such as Mitrion's Platform, CHiMPS from Xilinx, DIME-C from Nallatech and the Rapid RC Toolbox.

HPC vendors are moving into the FPGA space. Cray offers the Cray XD1 and, more importantly, has selected DRC FPGA coprocessors for its HPCS and future supercomputers. SGI has its own RASC FPGA-based blades and, as I understand it, Linux Networx and other vendors are preparing to enter the fray. AMD's Torrenza and Fusion initiatives and Intel's opening of its front-side bus to Xilinx and Altera are indications that the industry is seriously looking at this option.

ORNL and Cray are also evaluating FPGA speedups for FASTA, the parallel sequence comparison application from the University of Virginia. Based on successful parallel FASTA results on small data sets, the OpenFPGA benchmark, comprising the comprehensive 6 GB human genome sequencing application, was attempted. In addition to biological applications with minimal floating-point requirements, ORNL is exploring several other scientific application codes, including a climate code, to see how they fare in exploiting FPGA computation speedup.

Richard Wain from the CCLRC Daresbury Laboratory gave a talk titled "Putting FPGAs into Perspective", offering a user's-eye view of FPGAs, Cell and other novel processing technologies. His main theme was that for all these new technologies, programming is a big barrier: HPC has enormous legacy codes, and porting them to PLDs is a mammoth task. Richard considered the barriers to adoption of these technologies in mainstream scientific HPC, providing examples to illustrate some of these barriers and offering suggestions for how they might be removed in the future.

Some people expressed the view that 2007 could be the breakthrough year for FPGA supercomputing and that the market availability of several real-world FPGA-accelerated applications for HPC may become the catalyst. As the demand for high performance and lower power consumption grows, FPGAs and other accelerators are getting more attention. It was claimed that when accelerators such as FPGAs, GPUs and GPGPUs are compared, FPGAs have strong technological advantages in a number of application areas.

When I asked Matthias Fouquet-Lapar, Principal Engineer from SGI, to comment on the above claim, this is what he said: “SGI always had reliable system operation as the top priority on all of our product lines. Substantial engineering resources are being applied to product development at all stages to include reliability features, such as SECDED on communication channels and memories. FPGAs are doing very well in this aspect, having, for example, on-chip ECC for block RAMs, which is then extended by the SGI infrastructure to attached off-chip SRAMS as well as NL4 channels”.

He went on to say: “These features are an integral part of the system design and cannot be added at a later stage. Current GPGPUs lack this kind of protection mechanism, being driven mainly by the gaming market where, for example, a wrong pixel in a single frame is not even perceived by the user. This is very different for scientific algorithms, where errors will become a problem because of the iterative nature of programs and will eventually be propagated leading to undetected data corruption. Our engineering teams are carefully evaluating all new acceleration techniques; however, high reliability in large and ultra-scale HPC system remains our top priority in the interest of our customers”.

Thus, in a nutshell, it is 'horses for courses': GPUs may not yet be reliable enough for scientific applications.
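
For readers unfamiliar with the SECDED protection mentioned above, the C sketch below illustrates the principle on a 4-bit word using a Hamming(7,4) code plus an overall parity bit. Production implementations such as those SGI describes work on 64-bit words; this is an illustration only, not SGI's code.

```c
/* SECDED (single-error-correct, double-error-detect) in miniature:
 * Hamming(7,4) plus an overall parity bit on a 4-bit data word. */
#include <stdio.h>
#include <stdint.h>

static int parity8(uint8_t x) { x ^= x >> 4; x ^= x >> 2; x ^= x >> 1; return x & 1; }

/* Bit i of the codeword holds Hamming position i (1..7); bit 0 holds the
 * overall parity bit.  Data bits sit at positions 3, 5, 6 and 7. */
static uint8_t secded_encode(uint8_t data)
{
    uint8_t d1 = data & 1, d2 = (data >> 1) & 1, d3 = (data >> 2) & 1, d4 = (data >> 3) & 1;
    uint8_t w = (uint8_t)((d1 << 3) | (d2 << 5) | (d3 << 6) | (d4 << 7));
    w |= (uint8_t)((d1 ^ d2 ^ d4) << 1);        /* p1 covers positions 3,5,7 */
    w |= (uint8_t)((d1 ^ d3 ^ d4) << 2);        /* p2 covers positions 3,6,7 */
    w |= (uint8_t)((d2 ^ d3 ^ d4) << 4);        /* p4 covers positions 5,6,7 */
    return w | (uint8_t)parity8(w);             /* overall parity in bit 0   */
}

/* Returns 0 if the word is clean or has been corrected in place,
 * -1 on a detected-but-uncorrectable double error. */
static int secded_check(uint8_t *w)
{
    int syndrome = parity8(*w & 0xAA) | (parity8(*w & 0xCC) << 1) | (parity8(*w & 0xF0) << 2);
    int overall  = parity8(*w);                 /* 1 means an odd number of flipped bits */
    if (!overall)
        return syndrome ? -1 : 0;               /* two flips: detect only */
    *w ^= (uint8_t)(1u << (syndrome ? syndrome : 0));  /* one flip: correct it */
    return 0;
}

int main(void)
{
    uint8_t w = secded_encode(0x9);
    w ^= 1u << 5;                               /* inject a single-bit error */
    printf("single error corrected: %s\n", secded_check(&w) == 0 ? "yes" : "no");
    w ^= (1u << 3) | (1u << 6);                 /* inject a double error */
    printf("double error detected:  %s\n", secded_check(&w) == -1 ? "yes" : "no");
    return 0;
}
```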

Last year it was suggested that an OpenFPGA organisation be set up. That organisation is now up and running, with some 400 participants from 40 countries on its mailing list. "The mission of Open FPGA is to promote the use of FPGAs in high level and enterprise applications by collaborating, defining, developing and sharing critical information, technologies and best practices". The OpenFPGA community has set up a number of working groups to address specific issues, including developing standards and organising user forums to promote FPGA technology. This symposium allocated significant time to discussing the proposals from these working groups, which cover application requirements, benchmarking, core library interoperability, general interfaces, application libraries and high-level language definitions. For further details visit the web site: www.openfpga.org.
 
In summary, at last year's symposium the attendees from the HPC community were mainly early technology adopters and enthusiasts. This year there were examples of real application codes implemented on FPGAs. The realisation by FPGA vendors that ease of use and standards are key factors for propelling the industry forward is a positive sign. It is clear that only by porting real HPC applications onto FPGAs and demonstrating substantial benefits to the user community will the technology generate the traction for a breakthrough into mainstream HPC markets. This is understood and has been taken on board by both hardware and software providers.

The consensus view at this symposium was that a number of positive developments have occurred since last year, and that FPGAs are increasingly becoming part of HPC. As more silicon becomes available, computer architectures are being augmented with the usual engineering tradeoffs, integrating specialised devices such as FPGAs, Cell, ClearSpeed array coprocessors and graphics cards to perform specific functions, enhancing computing power for specific application domains without leaving the general-purpose computing environment. FPGAs are competing in the same space as other accelerators and have a strong role to play; whether they represent the best technology compared with GPUs, Cell or ClearSpeed is an open question, and only time will tell. The future of these devices depends to a great extent on the path taken by the vendors in the HPC industry.

—–

Copyright (c) Christopher Lazou, HiPerCom Consultants, Ltd., UK. March 2007. Brands and names are the property of their respective owners.
