Tag: NVIDIA

Machine Learning Guru Sees Future in Multi-GPU Clusters

Apr 30, 2015 |

Machine learning has made enormous strides in the past few years, owing in large part to the powerful and efficient parallel processing provided by general-purpose GPUs. The latest example of this trend is a partnership between New York University’s Center for Data Science and NVIDIA. The mission, says the pair, is to develop next-gen deep learning Read more…

Summit Puts 13 Code Projects Into Readiness Program

Apr 15, 2015 |

When the Oak Ridge National Laboratory’s Summit supercomputer powers up in 2018, it will provide the Department of Energy (DOE) research community with 150 to 300 peak petaflops of computational performance. To extract the highest benefit from this multi-million dollar machine that will be five to ten times the capability of the current fastest US Read more…

Something for Everyone at GPU Technology Conference

Mar 23, 2015 |

Once relegated to the category of specialized gaming hardware, today’s graphics processors are solving some of the world’s toughest computing problems. During GTC15 last week in San Jose, the full breadth and depth of session topics provided even more evidence of how far graphics processors have come from their gaming roots. And while this year deep Read more…

IBM’s First OpenPOWER Server Targets HPC Workloads

Mar 19, 2015 |

The first annual OpenPOWER Summit, held this week in San Jose, Calif., in tandem with NVIDIA’s GPU Technology Conference (GTC), launched a raft of hardware and other announcements intended to wrest market share from Intel. On Wednesday, foundation members showed off more than a dozen hardware solutions, an assortment of systems, boards, and cards, and even a new microprocessor Read more…

A Comparison of Heterogeneous and Manycore Programming Models

Mar 2, 2015 |

The high performance computing (HPC) community is heading toward the era of exascale machines, expected to exhibit an unprecedented level of complexity and size. The community agrees that the biggest challenges to future application performance lie with efficient node-level execution that can use all the resources in the node. These nodes might be composed of Read more…

GPU-Powered Simulations Advance Heart Research

Feb 5, 2015 |

With heart disease topping the list as the number one cause of death worldwide, heart rhythm disorders, or arrhythmias, are worthy of serious concern. Hoping to halt the devastating effects of these disorders is the Victor Chang Cardiac Research Institute (VCCRI) in Darlinghurst, Australia, where researchers are using a supercomputer to better understand, diagnose and Read more…

NVIDIA K80 GPU-System Speeds Up Bioinformatics Tool 12X

Jan 28, 2015 |

Not long ago, the high cost and relative slowness of DNA sequencing were the rate-limiting bottlenecks in biomedical research. Today, post-sequencing data analysis is the biggest challenge. The reason, of course, is the prodigious output from modern next-generation sequencing (NGS) instruments (e.g., Illumina and ThermoFisher/Life Technologies), which is overwhelming analysis pipelines. Efficiently sifting the data treasure trove Read more…

NVIDIA GPUs Unfold Secrets of the Human Genome

Jan 16, 2015 |

With 3 billion base pairs of DNA on hand, it’s no wonder that genes are able to program nearly every detail of our physical makeup, from constructing organs to fighting off disease. But how can a system so vast find the right operating manual for one body part, and ignore all the data meant for Read more…

New Code Paradigms Cooking for CORAL

Dec 2, 2014 |

News of the massive CORAL procurement for the next generation of pre-exascale systems stole headlines in November, but now that the excitement is simmering down, many are beginning to ask critical questions about the architecture—and what it will mean for programmers trying to take advantage of the massive amount of memory, compute, and options to blend the Read more…

NVIDIA Foundation Sponsors Pioneering Cancer Research

Dec 1, 2014 |

NVIDIA employees have stepped up to the plate on behalf of a very worthy cause. Earlier this week, the NVIDIA Foundation dedicated $400,000 in grants to turn innovative computing methods into cancer-fighting technologies as part of its Compute the Cure effort. Two projects – one at Boston’s Dana-Farber Cancer Institute and the other based on the distributed computing Read more…