Container App ‘Singularity’ Eases Scientific Computing

Oct 20, 2016

HPC container platform Singularity is just six months out from its 1.0 release but is already making inroads across the HPC research landscape. It’s in use at Lawrence Berkeley National Laboratory (LBNL), where Singularity founder Gregory Kurtzer has worked in the High Performance Computing Services (HPCS) group for 16 years, and it’s going into other …

IDC: Searching for Dark Energy in the HPC Universe

Oct 20, 2016

The latest scientific evidence indicates that the universe is expanding at an accelerating rate and that so-called dark energy is the driver behind this growth. Even though it comprises roughly two-thirds of the universe, not much is known about dark energy because it cannot be directly observed.

SC16 Showcases Use of HPC and Cloud In Cancer Research

Oct 19, 2016

The effort to attack cancer with HPC resources has been growing for years. Indeed, it’s accurate to say the sequencing of the human genome was as much a tour de force of HPC as of the new DNA sequencers. Back in June, Department of Energy Secretary Ernest Moniz blogged on the effort (Supercomputers are key …

PGS Adds Second Cray Super to Houston Mega Center

Oct 17, 2016

“We’re gonna need a bigger supercomputer” is what Norwegian oil and gas company Petroleum Geo-Services (PGS) must have said to Cray ahead of working with the iconic supercomputer maker to expand its seismic processing capability by a full 50 percent. And it’s not like PGS didn’t already have a big supercomputer.

Researchers Shrink Transistor Gate to One Nanometer

Oct 13, 2016

A team of US scientists may have just breathed new life into a faltering Moore’s law and advanced the limits of microelectronic miniaturization with the fabrication of a transistor with a 1 nm gate. The breakthrough portends a path beyond silicon-based transistors, which have been widely predicted to hit a wall at 5 nanometers.

ORNL’s Future Technologies Group Tackles Memory and More

Oct 13, 2016

“Imagine if you’ve got a Titan-size computer (27 PFlops) and it has main memory that’s partially non-volatile memory, and you could just leave your data in that memory between executions, then just come back and start computing on that data as it sits in memory,” says Jeffrey Vetter, group leader of the Future Technologies Group at Oak Ridge National Laboratory.

Presidential Report Explores Best Way to Harness AI

Oct 13, 2016

A new report from the Office of Science and Technology Policy (OSTP) addresses the fast-growing field of artificial intelligence (AI), which is increasingly poised to reshape the way we live and work. Titled “Preparing for the Future of Artificial Intelligence,” the report makes 23 policy recommendations on a number of topics concerned with the best way to harness the power of machine learning and algorithm-driven intelligence for the benefit of society.

Cray KNL-Based XC40 Shines on STAC-A2 Benchmark

Oct 10, 2016

Ever-faster financial analysis is a much-sought competitive edge throughout financial services. Last week a new STAC report showed a Cray XC40 Knights Landing solution outperforming a host of alternatives on the STAC-A2 benchmark, intended to test technology stacks used for compute-intensive analytic workloads involved in pricing and risk management. According to the report, the …

Bank of Italy Converges HPC and Enterprise Office with New Cluster

Oct 10, 2016

The democratization of high performance computing (HPC) and the converged datacenter have been topics of late in the IT community. This is where HPC, high performance data analytics (big data/Hadoop workloads), and enterprise office applications all run on a common clustered compute architecture with a single file system and network.

Power8 with NVLink Coming to the Nimbix Cloud

Oct 6, 2016

Starting later this month, HPC professionals and data scientists wishing to try out NVLink-connected Nvidia Pascal P100 GPUs won’t have to spend upwards of $100,000 on Nvidia’s DGX-1 server or fork over about half that for IBM’s Power8 server with NVLink and four Pascal GPUs. Soon they’ll be able to get the power of Pascal …

RENCI/Dell Supercomputer Charts Hurricane Matthew’s Storm Surge

Oct 6, 2016

Hurricane Matthew, now headed into Florida having already hammered Haiti and other parts of the Caribbean, is a stark reminder of the importance of computer modeling not only in predicting a storm’s strength and path but also in predicting and plotting the storm surge, which is often its most destructive component. Right now, the Hatteras supercomputer …

BSC Presents Plan to Energize Europe’s Big Data Efforts

Oct 5, 2016

Researchers from the Barcelona Supercomputing Center today presented the big data roadmap commissioned by the EU as part of the RETHINK big project, intended to identify technology goals, obstacles and actions for developing a more effective big data infrastructure and competitive position for Europe over the next ten years. Not surprisingly, the leading position …

Dell EMC Engineers Strategy to Democratize HPC

Sep 29, 2016

The freshly minted Dell EMC division of Dell Technologies is on a mission to take HPC mainstream with a strategy that hinges on engineered solutions, beginning with a focus on three industry verticals: manufacturing, research and life sciences. “Unlike traditional HPC where everybody bought parts, assembled parts and ran the workloads and did iterative engineering, we want folks to focus on time to innovation and let us worry about the infrastructure,” said Jim Ganthier, senior vice president, validated solutions organization at Dell EMC Converged Platforms Solution Division.

SGI, ANSYS Set New Record for Scaling Commercial CAE Code

Sep 27, 2016

SGI, the supercomputing vendor recently acquired by HPE, has teamed with ANSYS, the product engineering and simulation software company, to set a new world record for scaling commercial CAE code. According to SGI, the two companies broke a record set last year by running ANSYS Fluent combustion modeling software across 145,000 CPU cores, exceeding by …

DOE Invests $16M in Supercomputer Technology to Advance Material Sciences

Sep 22, 2016

The Department of Energy (DOE) plans to invest $16 million over the next four years in supercomputer technology that will accelerate the design of new materials by combining “theoretical and experimental efforts to create new validated codes.” The new program will focus on software development that eventually may run on exascale machines. Luke Shulenburger of Sandia …

New Genomics Pipeline Combines AWS, Local HPC, and Supercomputing

Sep 22, 2016

Declining DNA sequencing costs and the rush to do whole genome sequencing (WGS) of large cohort populations – think 5,000 subjects now, but many more thousands soon – present a formidable computational challenge to researchers attempting to make sense of large cohort datasets. No single architecture is best. This month researchers report developing a hybrid …

Energy Giant Vestas Harnesses HPC and Analytics for Renewables

Sep 21, 2016

The energy industry was an early adopter of supercomputing; in fact, energy companies have the most powerful supercomputers in the commercial world. And although HPC in the energy sector is almost exclusively associated with seismic workloads, it also plays a critical role in renewables, reflecting the growing maturity of that vertical. The largest …

Larry Smarr Helps NCSA Celebrate 30th Anniversary

Sep 20, 2016

Throughout the past year, the National Center for Supercomputing Applications has been celebrating its 30th anniversary. On Friday, Larry Smarr, whose unsolicited 1983 proposal to the National Science Foundation (NSF) begat NCSA in 1985 and helped spur NSF to create not one but five national centers for supercomputing, gave a celebratory talk at NCSA.

Deep Learning Paves Way for Better Diagnostics

Sep 19, 2016

Stanford researchers are leveraging GPU-based machines in the Amazon EC2 cloud to run deep learning workloads with the goal of improving diagnostics for a chronic eye disease called diabetic retinopathy. The disease is a complication of diabetes that can lead to blindness if blood sugar is poorly controlled. It affects about 45 percent of diabetics and 100 million people worldwide, many in developing nations.

Nvidia Launches Pascal GPUs for Deep Learning Inferencing

Sep 12, 2016

Already entrenched in the deep learning community for neural net training, Nvidia wants to secure its place as the go-to chipmaker for datacenter inferencing. At the GPU Technology Conference (GTC) in Beijing Tuesday, Nvidia CEO Jen-Hsun Huang unveiled the latest additions to the Tesla line, Pascal-based P4 and P40 GPU accelerators, as well as new software all aimed at improving performance for inferencing workloads that undergird applications like voice-activated assistants, spam filters, and recommendation engines.