In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.
As machine learning becomes a more common tool of the trade in HPC, the range of available processors and accelerators (such as CPUs, GPUs and FPGAs) has grown substantially. In this paper, a team of authors from MIT’s Lincoln Laboratory Supercomputing Center surveys the state of these processors and accelerators for machine learning, identifying trends in power consumption, precision and more.
Authors: Albert Reuther, Peter Michaleas, Michael Jones, Vijay Gadepally, Siddharth Samsi and Jeremy Kepner.
Designing and synthesizing DNA using nanotechnology is a burgeoning field, but computational requirements remain a substantial bottleneck for researchers. These authors – a duo from the University of Illinois at Urbana-Champaign – applied a supercomputing-driven multi-resolution simulation framework, called “MrDNA,” that can produce an atom-scale structure of a self-assembled DNA nanosystem in just half an hour.
Authors: Christopher Maffeo and Aleksei Aksimentiev.
More and more, high-performance computing resources are becoming an essential item for university departments – and not just the computer scientists. This paper from George Washington University explores the university’s experiences in planning and establishing its first HPC center and presents a set of lessons learned for other universities aiming to acquire HPC resources.
Authors: Glen MacLachlan, Jason Hurlburt, Marco Suarez, Kai Leung Wong, William Burke, Terrence Lewis, Andrew Gallo, Jaroslav Flidr, Raoul Gabiam, Janis Nicholas and Brian Ensor.
Object detection algorithms often have high computational requirements. In this paper, written by a team from the Autonomous University of Tamaulipas, the authors apply a parallelization strategy using Lustre and MPI to conduct face detection on an HPC cluster. The authors found a substantial reduction in the read time of image files and a similarly large improvement in processing speed.
Authors: Hugo Eduardo Camacho Cruz, Julio Cesar González Mariño and Jesús Humberto Foullon Peña.
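The paper’s exact decomposition is not detailed here, but a common pattern behind this kind of MPI-based image processing is static partitioning of the file list, with each rank reading its own share from the shared Lustre file system. A minimal sketch in plain Python (the `chunk_for_rank` helper is illustrative, not from the paper):

```python
def chunk_for_rank(paths, rank, n_ranks):
    # Block-partition the file list so each MPI rank gets a contiguous,
    # near-equal share; the first `extra` ranks take one extra file each.
    base, extra = divmod(len(paths), n_ranks)
    start = rank * base + min(rank, extra)
    size = base + (1 if rank < extra else 0)
    return paths[start:start + size]

# Example: 10 image files split across 4 hypothetical MPI ranks.
images = [f"img_{i:04d}.jpg" for i in range(10)]
parts = [chunk_for_rank(images, r, 4) for r in range(4)]
# Chunk sizes come out as 3, 3, 2, 2, and together they cover every file once.
```

In an actual MPI run, each rank would compute only its own chunk (using its rank from the communicator) and then run face detection on those files independently, which is where the reported read-time and processing-speed gains would come from.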
Visualization is often important for translating research results into something more accessible and usable. The authors – a trio from Virginia Tech – present a workflow that aims to tackle two issues in scientific visualization: rendering scalability and scalability of the representations themselves. To improve these areas, the authors leveraged techniques such as subsampling and parallel rendering, testing their workflow on HPC clusters.
Authors: Ayat Mohammed, Nicholas F. Polys and Duncan Farrah.
Electroencephalograms (EEGs) are important tests for diagnosing brain conditions like seizures, epilepsy, tumors and more. These authors — a team from the University of California San Diego and the École Polytechnique Fédérale de Lausanne — describe how EEGLAB, an open-source signal processing environment for EEGs, was integrated with the Neuroscience Gateway (NSG), allowing researchers to run EEGLAB on HPC resources through the XSEDE network.
Authors: Ramón Martínez-Cancino, Arnaud Delorme, Dung Truong, Fiorenzo Artoni, Kenneth Kreutz-Delgado, Subhashini Sivagnanam, Kenneth Yoshimoto, Amitava Majumdar and Scott Makeig.
With supercomputing becoming a necessity for global competitiveness, some countries are finding themselves at a disadvantage. This study, conducted by researchers from Serbia, Slovenia, and Bosnia and Herzegovina, examines the HPC needs of countries in the Danube region (such as the authors’ own countries, as well as Bulgaria, the Czech Republic and many others). The authors aimed to determine the most significant criteria for the introduction of HPC in each country, outlining the nations’ individualized needs.
Authors: Milovan Tomašević, Lucija Lapuh, Željko Stević, Dragiša Stanujkić and Darjan Karabašević.
Do you know about research that should be included in next month’s list? If so, send us an email at [email protected]. We look forward to hearing from you.