The civil engineer Konrad Zuse was born in Berlin exactly 100 years ago. In 1941, he built the Z3, the world’s first programmable computer. Building on his pioneering work, scientists at the Jülich Supercomputing Center have now set a world record by simulating the largest quantum computer system to date, one with 42 qubits.
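To see why 42 qubits is a supercomputer-scale feat, note that a brute-force state-vector simulation must hold 2^n complex amplitudes in memory, so every additional qubit doubles the requirement. A rough, back-of-the-envelope sketch of that scaling, assuming 16-byte double-precision complex amplitudes (the actual Jülich code may partition and store the state differently):

```python
# Illustrative arithmetic only: memory needed to hold the full quantum
# state vector of n qubits, assuming one 16-byte complex amplitude per
# basis state (2**n of them). Each extra qubit doubles the footprint.
def state_vector_memory_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    return (2 ** n_qubits) * bytes_per_amplitude

if __name__ == "__main__":
    for n in (30, 36, 42):
        tib = state_vector_memory_bytes(n) / 2 ** 40
        print(f"{n} qubits -> {tib:,.3f} TiB for the state vector alone")
```

At 42 qubits the state vector alone comes to 64 TiB, which is why such a simulation needs the aggregate memory of a large parallel machine rather than a single server.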
As high performance computing vendors polish their server and workstation portfolios with the latest multicore CPU and GPGPU wonders, Pico Computing is quietly making inroads into the HPC application space with its FPGA-based platforms. By picking the spots where reconfigurable computing makes the most sense, the company is looking to leverage its scalable FPGA technology to greatest effect.
Upgraded machine will sport 192 FPGAs and nearly a terabyte of memory.
A bioinformatics scientist looks beyond the Linux cluster.
Could the chip maker’s rumored interest in FPGAs be part of an HPC strategy?
Online, at conferences, and in theory, manycore processors and accelerators such as GPUs and FPGAs are being viewed as the next big revolution in high performance computing. If they live up to their potential, these accelerators could someday transform how computational science is performed, delivering far greater computing power and energy efficiency.
Despite all the recent hoopla about GPGPUs and eight-core CPUs, proponents of reconfigurable computing continue to sing the praises of FPGA-based HPC. We got the opportunity to ask Dr. Alan George, who runs the NSF Center for High-Performance Reconfigurable Computing, about the work going on there and what he thinks the technology can offer to high performance computing users.
Mitrionics has begun work on an experimental compiler that aims to make parallel programming architecture-agnostic. We asked Stefan Möhl, Mitrionics’ chief science officer and co-founder, what’s behind the new technology and what prompted the decision to expand beyond their FPGA roots.
Once more unto the breach.