Accelerating Brain Research with Supercomputers

By Aaron Dubrow

August 5, 2013

The brain is the most complex device in the known universe. With 100 billion neurons connected by a quadrillion synapses, it’s like the world’s most powerful supercomputer on steroids. To top it all off, it runs on only 20 watts of power… about as much as the light in your refrigerator.

These were a few of the introductory ideas discussed by Terrence Sejnowski, Director of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies, a co-director of the Institute for Neural Computation at UC San Diego, an investigator with the Howard Hughes Medical Institute, and a member of the advisory committee to the director of the National Institutes of Health (NIH) for the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative, which was launched in April 2013.

“I was in the White House when the program was announced,” Sejnowski recalled. “It was very exciting. The President was telling me that my life’s work was going to be a national priority over the next 15 years.”

At that event, the NIH, the National Science Foundation, and the Defense Advanced Research Projects Agency announced a commitment of roughly $110 million in first-year funding to develop innovative tools and techniques for brain research, an investment expected to ramp up as the Initiative gains ground.

In a recent talk in San Diego at the XSEDE13 conference — the annual meeting of researchers, staff and industry who use and support the U.S. cyberinfrastructure — Sejnowski described the rapid progress that neuroscience has made over the last decade and the challenges ahead. High-performance computing, visualization and data management and analysis will play critical roles in the next phase of the neuroscientific revolution, he said. 

A deeper understanding of the brain would advance our grasp of the processes that underlie mental function. Ultimately it may also help doctors comprehend and diagnose mental illness and degenerative diseases of the brain and possibly even intervene to prevent these diseases in the future.

“Not only can we understand what happens when the brain is functioning normally, maybe we can understand what’s happening when it’s not functioning right, as in mental disorders,” he said.

Currently, this dream is a long way off. Brain activity occurs at every scale, from the atomic to the macroscopic, and activity at each scale contributes to the workings of the brain. Sejnowski illustrated the challenge of understanding even a single aspect of the brain with a series of visualizations showing just how interwoven and complex the brain’s various components are.

One video examined how the axons, dendrites and other components fit together in a small piece of brain tissue called the neuropil. He likened the structure to “spaghetti architecture.” A second video, showing what looked like fireworks flashing across many regions of the brain, represented the complex choreography by which electrical signals travel through the brain.

Despite the rapid rate of innovation, the field is still years away from obtaining a full picture of a mouse’s or even a worm’s brain. It would require an accelerated rate of growth to reach the targets that neuroscientists have set for themselves. For that reason, the BRAIN Initiative is focusing on new technologies and tools that could have a transformative impact on the field.

“If we could record data from every neuron in a circuit responsible for a behavior, we could understand the algorithms that the brain uses,” Sejnowski said. “That could help us right now.”

Larger, more capable supercomputers, along with compatible tools and technologies, are needed to handle the increasing complexity of numerical models and the unwieldy datasets gleaned from fMRI and other imaging modalities. Other tools and techniques that Sejnowski believes will be required include industrial-scale electron microscopy; improvements in optogenetics; image segmentation via machine learning; developments in computational geometry; and crowdsourcing to overcome the “Big Data” bottleneck.
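To make the machine-learning segmentation idea concrete, here is a minimal, illustrative sketch of a pixel-wise classifier in Python. The random image and hand-labeled patches are stand-ins invented for this example; real electron-microscopy pipelines use far richer features and models.

```python
# Minimal sketch: a pixel-wise random-forest segmenter for EM-style images.
# The image and labels below are synthetic placeholders, not real data.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
image = rng.random((128, 128))             # stand-in for an EM micrograph
labels = np.zeros((128, 128), dtype=int)   # 0 = unlabeled, 1 = membrane, 2 = interior
labels[10:20, 10:20] = 1                   # a few hand-annotated patches
labels[60:80, 60:80] = 2

# Per-pixel features: raw intensity plus Gaussian blurs at two scales.
features = np.stack(
    [image, gaussian_filter(image, 1), gaussian_filter(image, 4)], axis=-1
).reshape(-1, 3)

# Train only on the annotated pixels, then classify every pixel.
train = labels.ravel() > 0
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features[train], labels.ravel()[train])
segmentation = clf.predict(features).reshape(image.shape)
```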

“Terry’s talk was very inspiring for the XSEDE13 attendees and the entire XSEDE community,” said Amit Majumdar, technical program chair of XSEDE13. Majumdar directs the scientific computing application group at the San Diego Supercomputer Center (SDSC) and is affiliated with the Department of Radiation Medicine and Applied Sciences at UC San Diego. “With XSEDE being the leader in research cyberinfrastructure, it was great to hear that tools and technologies to access supercomputers and data resources are a big part of the BRAIN Initiative.”

For his part, Sejnowski has spent the past decade leading a team of researchers in building two software environments for brain simulation, MCell (Monte Carlo Cell) and CellBlender. MCell combines spatially realistic 3D models of brain geometry (as determined by brain scans and computational analysis) with simulations of the movements and reactions of molecules within and between brain cells; for instance, it can populate the 3D geometry with active ion channels, which govern the electrical and chemical activity of neurons. CellBlender visualizes MCell’s output to help computational biologists better understand their results.
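The flavor of such a simulation can be conveyed with a toy Monte Carlo sketch: molecules take Brownian steps inside a box and bind when they wander near a channel site. This is not MCell’s actual interface (MCell models are written in its MDL language or built through CellBlender), and all of the numbers below are assumptions chosen only for illustration.

```python
# Illustrative Monte Carlo diffusion-binding sketch (not MCell itself).
import numpy as np

rng = np.random.default_rng(1)
D = 1e-6                       # diffusion coefficient, cm^2/s (assumed)
dt = 1e-6                      # time step, s (assumed)
sigma = np.sqrt(2 * D * dt)    # per-axis Brownian step scale
box = 1e-4                     # 1-micron cube, in cm

molecules = rng.uniform(0, box, size=(1_000, 3))  # diffusing molecules
channels = rng.uniform(0, box, size=(50, 3))      # fixed channel sites
bind_radius = 2e-7                                # binding radius, cm (assumed)
bound = 0

for step in range(1_000):                         # 1,000 steps = 1 ms
    molecules += rng.normal(0.0, sigma, molecules.shape)  # Brownian motion
    np.clip(molecules, 0, box, out=molecules)             # crude reflective walls
    # Bind (and remove) any molecule within bind_radius of a channel.
    d2 = ((molecules[:, None, :] - channels[None, :, :]) ** 2).sum(-1)
    hit = (d2 < bind_radius**2).any(axis=1)
    bound += hit.sum()
    molecules = molecules[~hit]

print(f"{bound} binding events after 1 ms of simulated diffusion")
```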

Researchers at the Pittsburgh Supercomputing Center, the University of Pittsburgh, and the Salk Institute developed these software packages collaboratively with support from the National Institutes of Health, the Howard Hughes Medical Institute, and the National Science Foundation. The open-source software runs on several of the XSEDE-allocated supercomputers and has generated hundreds of publications.

MCell and Cellblender are a step in the right direction, but they will be stretched to their limits when dealing with massive datasets from new and emerging imaging tools. “We need better algorithms and more computer systems to explore the data and to model it,” Sejnowski said. “This is where the insights will come from — not from the sheer bulk of data, but from what the data is telling us.”

Supercomputers alone will not be enough either, he said. An ambitious, long-term project of this magnitude requires a small army of students and young professionals to make progress.

Sejnowski likened the announcement of the BRAIN Initiative to the famous speech in which John F. Kennedy vowed to send an American to the moon. When Neil Armstrong landed on the moon eight years later, the average age of the NASA engineers who sent him there was 26. Encouraged by JFK’s passion for space travel and galvanized by competition from the Soviet Union, talented young scientists had joined NASA in droves. Sejnowski hopes the same will be true for neuroscience and computational science.

“This is an idea whose time has come,” he said. “The tools and techniques are maturing at just the right time and all we need is to be given enough resources so we can scale up our research.”

The annual XSEDE conference, organized by the National Science Foundation’s Extreme Science and Engineering Discovery Environment (xsede.org) with the support of corporate and non-profit sponsors, brings together the extended community of individuals interested in advancing research cyberinfrastructure and integrated digital services for the benefit of science and society. XSEDE13 was held July 22-25 in San Diego; XSEDE14 will be held July 13-18 in Atlanta. For more information, visit https://conferences.xsede.org/xsede14
