Rutgers University-Newark to Add HPC Cluster

May 1, 2014

May 1 — Rutgers University-Newark is about to add a $700,000 High-Performance Computing Cluster (HPCC) to its arsenal, bolstering research efforts across an array of disciplines for years to come.

The cluster, which goes by the name NM3 (for Newark Massive Memory Machine), will be a parallel-computing behemoth running the Linux open-source operating system and containing 1,500 processors (CPUs) and massive amounts of shared random-access memory (RAM) – with all of the CPUs performing complex tasks simultaneously and transmitting data among themselves efficiently. It will also contain significant data-storage capacity.

The net effect: Researchers will be able to increase the scale and speed of their work dramatically.

“This will mean better science,” says Rutgers University-Newark Chemistry Professor Michele Pavanello, the principal investigator on the grant for NM3. “And that could translate into more grant dollars in the future to help us expand the infrastructure.”

The Power of the Cluster

The high-performance computing cluster is funded through the November 2012 state bond referendum. As the principal investigator, Pavanello gathered input from a group of Rutgers University-Newark science faculty on their research and teaching needs, then designed a computer architecture to meet those needs and incorporated it into the grant proposal.

An assistant professor of theoretical chemistry, Pavanello is an ardent proponent of high-performance computing, which lets researchers perform advanced modeling using complex sets of raw data.

So is Professor Bart Krekelberg, of Rutgers University-Newark’s Center for Molecular and Behavioral Neuroscience (CMBN). He studies how the brain sees, and says that to map brain activity, modern techniques require whole arrays of electrodes to record hundreds of neurons at the same time, generating an enormous amount of data.

“The more areas of the brain we can record simultaneously, and the more electrodes we use, the better picture we get of how it works,” says Krekelberg. “The technology to record brain activity this way is fairly new, and it requires immense computing power to process; even sophisticated desktops no longer cut it. They simply take too long to do the job.”

Regardless of the discipline, the process of leveraging the power of the HPCC is the same, says Pavanello.

“Input raw data. Process it. Output it,” he says. “With the cluster, it happens very quickly. The processed data output is much smaller than the original raw data. And that is then ported to a desktop, where researchers can analyze it, or visualize it with a graphical user interface.”
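The reduce-then-analyze workflow Pavanello describes maps naturally onto a data-parallel pattern: split the raw data across CPUs, run the same compute-heavy reduction on each piece, and collect a much smaller processed output. A minimal Python sketch of that idea (the `preprocess` function and chunk count here are purely illustrative stand-ins, not anything specific to NM3):

```python
from multiprocessing import Pool

def preprocess(chunk):
    # Stand-in for a compute-heavy reduction step: here we just
    # sum each chunk, shrinking raw data to one summary value.
    return sum(chunk)

def run_pipeline(raw_data, n_chunks=4):
    # Split the raw data into equal chunks, process them in
    # parallel across worker processes, and return the (much
    # smaller) processed output for downstream analysis.
    size = len(raw_data) // n_chunks
    chunks = [raw_data[i * size:(i + 1) * size] for i in range(n_chunks)]
    with Pool(n_chunks) as pool:
        return pool.map(preprocess, chunks)

if __name__ == "__main__":
    # 1,000 raw values in, 4 summary values out.
    print(run_pipeline(list(range(1000))))
```

On a real cluster the same shape is typically expressed with MPI or a batch scheduler rather than a single machine's process pool, but the input/process/output structure is identical.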

To understand what Pavanello means by “very quickly,” consider this: One of Krekelberg’s brain-mapping sessions involving hundreds of electrodes generates about 50GB of data, which takes 12 hours to pre-process on his current servers. NM3 will process that same data in about an hour.
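Those reported figures imply roughly a twelvefold gain in throughput; a quick back-of-envelope check using the numbers above:

```python
# Reported figures: ~50 GB per recording session,
# 12 hours on current servers vs. about 1 hour on NM3.
session_gb = 50
old_hours, new_hours = 12, 1

old_rate = session_gb / old_hours  # GB processed per hour today
new_rate = session_gb / new_hours  # GB processed per hour on NM3

print(f"Current servers: {old_rate:.1f} GB/hour")
print(f"NM3:             {new_rate:.1f} GB/hour")
print(f"Speedup:         {new_rate / old_rate:.0f}x")
```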

“We can go an order of magnitude larger now and record brain activity with 10 times the number of electrodes,” Krekelberg says. “And soon, recording techniques will let us use even more.”

Pavanello frames this from a chemist’s perspective.

“We’ll be able to do much faster and scaled-up simulations, which provide insights that you can’t get from single experiments,” he says. “The strong simulation side expedites the science.”

Build It and They Will Come

The new HPCC will be housed in the current Rutgers University-Newark Data Center, a 1,000-square-foot room in Engelhard Hall that is home to the campus’s computer infrastructure. There will be enough space in the facility to triple the new cluster’s size as more grant funds come in.

Pavanello is heading up a committee that recently reviewed bids from six HPCC vendors. The hardware will arrive in late April, software uploading and testing will take place in May and June, and NM3 is scheduled to come online in fall 2014.

The psychology, chemistry, and earth & environmental science departments are expected to use the system most intensively, though many other departments across the campus, including biology, criminal justice, and urban planning, may also take interest.

Rutgers-Newark faculty and graduate students will have priority access to NM3 initially; researchers at other Rutgers campuses will submit proposals to a formal NM3 leadership committee and be slotted in as time permits.

Pavanello’s goal is to expand NM3’s size and share the cluster openly and democratically among all three Rutgers campuses. “I’d like us to create a true university community around this so that everyone can have access,” he says.

Source: Lawrence Lerner, Rutgers University
