Center Stage

Selectively Add GPU Acceleration without Ripping Apart Your Data Center

The need for speedy results is critical to business success in energy exploration, manufacturing, life sciences, financial services, and other industries. Companies in these fields are looking for ways to leverage hardware acceleration to run Big Data and HPC applications faster.

To that end, great efforts are going into modernizing code by parallelizing computations to run on systems that meld the latest generation of Intel Xeon CPUs with NVIDIA GPUs, AMD GPUs, or Intel Xeon Phi coprocessors. However, whether designing a complete system from scratch…

Read more...

More Articles

Weekly Twitter Roundup (Sept. 29, 2016)

Here at HPCwire, we aim to keep the HPC community apprised of the most relevant and interesting news items that get tweeted throughout the week. The tweets that caught our eye this past week are presented below. Read more…

Modern neuroscience methods can study the human brain at unprecedented resolution and generate large, diverse datasets with the potential to enrich our understanding of the brain's most basic functioning. The latest methods allow investigators to generate petabytes of brain mapping data. Future challenges for the community involve moving neuroscience toward big data and data science, and developing methods to capture, harmonize, and make all these data available to communities of researchers.

NSF Backs ‘Big Data Spokes’ with $10M in Grants

In recent years the Obama Administration and the National Science Foundation have worked to spur growth of big data infrastructure to handle academic, government, and industrial data-intensive research. In 2012, the Big Data Research and Development Initiative was launched by OSTP, and last year NSF announced BD Hubs. Read more…

Scientists engineered a new magnetic ferroelectric at the atomic-scale. A false-colored electron microscopy image shows alternating lutetium (yellow) and iron (blue) atomic planes. An extra plane of iron atoms was inserted every ten repeats, substantially changing the magnetic properties. (Credit: Emily Ryan and Megan Holtz/Cornell)

LBNL, Cornell Researchers Push Towards Low Power Devices

Pairing ferroelectric and ferrimagnetic materials so that their alignment can be controlled with a small electric field at near room temperatures has long been challenging. In the latest issue of Nature, a group of researchers from Lawrence Berkeley National Laboratory and Cornell University reports progress that could lead to ultra-low-power microprocessors, storage devices, and next-generation electronics. Read more…

SGI, ANSYS Set New Record for Scaling Commercial CAE Code

SGI, the supercomputing vendor recently acquired by HPE, has teamed with ANSYS, the product engineering and simulation software company, to set a new world record for scaling commercial CAE code. According to SGI, the two companies broke a record set last year by running ANSYS Fluent combustion modeling software across 145,000 CPU cores, exceeding the old record by more than 16,000 cores. Read more…

Vectors: How the Old Became New Again in Supercomputing

Vector instructions, once a powerful performance innovation of supercomputing in the 1970s and 1980s, became an obsolete technology in the 1990s. But like the mythical phoenix, vector instructions have arisen from the ashes. Here is the history of a technology that went from new to old, then back to new. Read more…

Weekly Twitter Roundup (Sept. 22, 2016)

Here at HPCwire, we aim to keep the HPC community apprised of the most relevant and interesting news items that get tweeted throughout the week. The tweets that caught our eye this past week are presented below. Read more…

CCC Weighs in on Need for Industry-Academic Collaboration

How best to foster academic-industry collaboration is a hot topic. The National Strategic Computing Initiative calls it out as one of five imperatives. The Department of Energy's HPC4Mfg program has been liberally doling out project grants. Read more…

DOE Invests $16M in Supercomputer Technology to Advance Material Sciences

The Department of Energy (DOE) plans to invest $16 million over the next four years in supercomputer technology that will accelerate the design of new materials by combining “theoretical and experimental efforts to create new validated codes.” The new program will focus on software development that eventually may run on exascale machines. Read more…

New Genomics Pipeline Combines AWS, Local HPC, and Supercomputing

Declining DNA sequencing costs and the rush to do whole genome sequencing (WGS) of large cohort populations – think 5,000 subjects now, but many more thousands soon – present a formidable computational challenge to researchers attempting to make sense of large cohort datasets. Read more…

Energy Giant Vestas Harnesses HPC and Analytics for Renewables

The energy industry was an early adopter of supercomputing; in fact, energy companies have the most powerful supercomputers in the commercial world. And although HPC in the energy sector is almost exclusively associated with seismic workloads, it also plays a critical role in renewables, reflecting the growing maturity of that vertical. Read more…

Larry Smarr Helps NCSA Celebrate 30th Anniversary

Throughout the past year, the National Center for Supercomputing Applications has been celebrating its 30th anniversary. On Friday, Larry Smarr, whose unsolicited 1983 proposal to the National Science Foundation (NSF) begat NCSA in 1985 and helped spur NSF to create not one but five national centers for supercomputing, gave a celebratory talk at NCSA. Read more…

Deep Learning Paves Way for Better Diagnostics

Stanford researchers are leveraging GPU-based machines in the Amazon EC2 cloud to run deep learning workloads with the goal of improving diagnostics for a chronic eye disease called diabetic retinopathy. The disease is a complication of diabetes that can lead to blindness if blood sugar is poorly controlled. It affects about 45 percent of diabetics and 100 million people worldwide, many in developing nations. Read more…

Aquila Debuts Warm Water Cooled OCP Server

New Mexico-based technology firm Aquila is announcing the first OCP-inspired server rack to use fixed cold plate liquid cooling technology. Based on the Facebook-initiated Open Compute Project (OCP) standard, the Aquarius rack integrates patented third-generation cooling technology designed by Clustered Systems. The platform supports up to 108 Xeon servers per rack and will target high-density HPC and hyperscale computing applications. Read more…

Sponsored Whitepapers

Infrastructure Challenges in HPC Storage

9/22/16 | DDN

Storage challenges in the data deluge are nothing new, but as we prepare for unprecedented information overload, the demand for innovative storage strategies is at an all-time high. Read more…

Advanced Scale Computing – Making the Case

8/24/16 | Zoomdata

Today’s leading organizations are dealing with larger data sets, higher volume and disparate data sources, and the need for faster insights. Read more…

Sponsored Multimedia

Do-It-Yourself HPC: How the Cloud Democratizes HPC and Public Data for Researchers

Creating the right technology environment is a time-consuming task for researchers who want to focus on science (not servers or long wait times at supercomputing centers). Read more…

Silicon Mechanics and Van Andel Institute Partner to Deliver an OpenStack HPC Solution

The Van Andel Institute (VAI) worked with Silicon Mechanics to provide its users not only a more powerful platform, but also a hybrid OpenStack HPC solution with the flexibility to support VAI's commitment to improving the health and changing the lives of current and future generations. Read more…