Accelerating Brain Research with Supercomputers

By Aaron Dubrow

August 5, 2013

The brain is the most complex device in the known universe. With 100 billion neurons connected by a quadrillion synapses, it’s like the world’s most powerful supercomputer on steroids. To top it all off, it runs on only 20 watts of power… about as much as the light in your refrigerator.

These were a few of the introductory ideas discussed by Terrence Sejnowski, Director of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies, co-director of the Institute for Neural Computation at UC San Diego, and an investigator with the Howard Hughes Medical Institute. Sejnowski is also a member of the advisory committee to the director of the National Institutes of Health (NIH) for the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative, which was launched in April 2013.

“I was in the White House when the program was announced,” Sejnowski recalled. “It was very exciting. The President was telling me that my life’s work was going to be a national priority over the next 15 years.”

At that event, the NIH, the National Science Foundation, and the Defense Advanced Research Projects Agency committed roughly $110 million for the first year to develop innovative tools and techniques for brain research, funding that is expected to ramp up as the Initiative gains ground.

In a recent talk in San Diego at the XSEDE13 conference — the annual meeting of researchers, staff and industry who use and support the U.S. cyberinfrastructure — Sejnowski described the rapid progress that neuroscience has made over the last decade and the challenges ahead. High-performance computing, visualization and data management and analysis will play critical roles in the next phase of the neuroscientific revolution, he said. 

A deeper understanding of the brain would advance our grasp of the processes that underlie mental function. Ultimately it may also help doctors comprehend and diagnose mental illness and degenerative diseases of the brain and possibly even intervene to prevent these diseases in the future.

“Not only can we understand what happens when the brain is functioning normally, maybe we can understand what’s happening when it’s not functioning right, as in mental disorders,” he said.

Currently, this dream is a long way off. Brain activity spans every scale from the atomic to the macroscopic, and activity at each scale contributes to the working of the whole. Sejnowski conveyed the challenge of understanding even a single aspect of the brain by showing a series of visualizations that illustrated just how interwoven and complex its components are.

One video [pictured below] examined how the axons, dendrites and other components fit together in a small piece of the brain, called the neuropil. He likened the structure to “spaghetti architecture.” A second video showed what looked like fireworks flashing across many regions of the brain and represented the complex choreography by which electrical signals travel in the brain. 

Despite the rapid rate of innovation, the field is still years away from obtaining a full picture of a mouse’s or even a worm’s brain. It would require an accelerated rate of growth to reach the targets that neuroscientists have set for themselves. For that reason, the BRAIN Initiative is focusing on new technologies and tools that could have a transformative impact on the field.

“If we could record data from every neuron in a circuit responsible for a behavior, we could understand the algorithms that the brain uses,” Sejnowski said. “That could help us right now.”

Larger, more capable supercomputers, along with compatible tools and technologies, are needed to handle the growing complexity of the numerical models and the unwieldy datasets gleaned from fMRI and other imaging modalities. Other tools and techniques that Sejnowski believes will be required include industrial-scale electron microscopy; improvements in optogenetics; image segmentation via machine learning; advances in computational geometry; and crowdsourcing to overcome the “Big Data” bottleneck.
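One of those techniques, image segmentation via machine learning, can be sketched in a few lines: a classifier is trained on a handful of labeled pixels and then asked to label every pixel of a micrograph-like image. The example below is a generic illustration using scikit-learn on synthetic data, not any specific pipeline from the talk; the features, labels, and parameter values are assumptions chosen for readability.

```python
# Generic sketch of machine-learning image segmentation: train a pixel
# classifier on a few hand-labeled patches, then label the whole image.
# Synthetic data and simple features; not a production EM pipeline.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "micrograph": a bright blob (a cell body, say) on a noisy background.
img = rng.normal(0.0, 0.3, size=(128, 128))
yy, xx = np.mgrid[0:128, 0:128]
img += ((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2).astype(float)

# Per-pixel features: raw intensity plus two levels of Gaussian smoothing.
features = np.stack(
    [img, gaussian_filter(img, 2), gaussian_filter(img, 5)], axis=-1
).reshape(-1, 3)

# Sparse "hand labels": 1 = inside the structure, 0 = background, -1 = unlabeled.
labels = np.full(img.shape, -1)
labels[60:68, 60:68] = 1   # a small patch inside the blob
labels[0:8, 0:8] = 0       # a small background patch
labels = labels.ravel()
labeled = labels >= 0

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features[labeled], labels[labeled])

segmentation = clf.predict(features).reshape(img.shape)
print("foreground pixels:", int(segmentation.sum()))
```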

“Terry’s talk was very inspiring for the XSEDE13 attendees and the entire XSEDE community,” said Amit Majumdar, technical program chair of XSEDE13. Majumdar directs the scientific computing application group at the San Diego Supercomputer Center (SDSC) and is affiliated with the Department of Radiation Medicine and Applied Sciences at UC San Diego. “With XSEDE being the leader in research cyberinfrastructure, it was great to hear that tools and technologies to access supercomputers and data resources are a big part of the BRAIN Initiative.”

For his part, Sejnowski has spent the past decade leading a team of researchers that created two software environments for brain simulation, MCell (Monte Carlo Cell) and CellBlender. MCell combines spatially realistic 3D models of brain geometry (derived from brain scans and computational analysis) with simulations of the movements and reactions of molecules within and between brain cells; for instance, it can populate the brain’s 3D geometry with active ion channels, which are responsible for much of the brain’s chemical signaling. CellBlender visualizes MCell’s output to help computational biologists better understand their results.
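To make the “Monte Carlo” in MCell’s name concrete, the sketch below shows the general idea in a few lines of Python: molecules diffuse by random walks and react stochastically when they come within range of a target. It is only an illustration of the approach, not MCell’s actual input language or API, and the diffusion coefficient, binding radius, and probabilities are assumed values chosen for readability.

```python
# Minimal sketch of Monte Carlo diffusion-reaction simulation: molecules take
# random-walk steps and bind stochastically when near a receptor.
# All names and parameter values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

D = 2e-7               # assumed diffusion coefficient (cm^2/s)
dt = 1e-6              # time step (s)
n_steps = 1000
n_molecules = 500
binding_radius = 1e-5  # assumed capture radius around the receptor (cm)
p_bind = 0.1           # assumed binding probability per encounter

# Neurotransmitter molecules released at a point, with one receptor nearby.
positions = np.zeros((n_molecules, 3))
receptor = np.array([5e-5, 0.0, 0.0])
bound = np.zeros(n_molecules, dtype=bool)

step_sigma = np.sqrt(2 * D * dt)  # std dev of each random-walk step per axis

for _ in range(n_steps):
    free = ~bound
    # Brownian motion: independent Gaussian displacement along each axis.
    positions[free] += rng.normal(0.0, step_sigma, size=(free.sum(), 3))
    # Stochastic reaction: free molecules within the binding radius may bind.
    dist = np.linalg.norm(positions - receptor, axis=1)
    hits = free & (dist < binding_radius) & (rng.random(n_molecules) < p_bind)
    bound |= hits

print(f"{bound.sum()} of {n_molecules} molecules bound after {n_steps * dt * 1e3:.1f} ms")
```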

Researchers at the Pittsburgh Supercomputing Center, the University of Pittsburgh, and the Salk Institute developed these software packages collaboratively with support from the National Institutes of Health, the Howard Hughes Medical Institute, and the National Science Foundation. The open-source software runs on several of the XSEDE-allocated supercomputers and has generated hundreds of publications.

MCell and Cellblender are a step in the right direction, but they will be stretched to their limits when dealing with massive datasets from new and emerging imaging tools. “We need better algorithms and more computer systems to explore the data and to model it,” Sejnowski said. “This is where the insights will come from — not from the sheer bulk of data, but from what the data is telling us.”

Supercomputers alone will not be enough either, he said. An ambitious, long-term project of this magnitude requires a small army of students and young professionals to make progress.

Sejnowski likened the announcement of the BRAIN Initiative to the famous speech in which John F. Kennedy vowed to send an American to the moon. When Neil Armstrong landed on the moon eight years later, the average age of the NASA engineers who sent him there was 26. Encouraged by JFK’s passion for space travel and galvanized by competition from the Soviet Union, talented young scientists had joined NASA in droves. Sejnowski hopes the same will be true for neuroscience and computational science.

“This is an idea whose time has come,” he said. “The tools and techniques are maturing at just the right time and all we need is to be given enough resources so we can scale up our research.”

The annual XSEDE conference, organized by the National Science Foundation’s Extreme Science and Engineering Discovery Environment (xsede.org) with the support of corporate and non-profit sponsors, brings together the extended community of individuals interested in advancing research cyberinfrastructure and integrated digital services for the benefit of science and society. XSEDE13 was held July 22-25 in San Diego; XSEDE14 will be held July 13-18 in Atlanta. For more information, visit https://conferences.xsede.org/xsede14
