August 09, 2012
LA JOLLA, Calif., Aug. 8 -- The University of California, San Diego and Yale University have been awarded a collaborative grant by the National Science Foundation (NSF) to develop a Neuroscience Gateway (NSG) that gives neuroscientists broadened access to essential high-performance computing (HPC) and storage resources.
Under the UC San Diego grant, the university's San Diego Supercomputer Center (SDSC) and its Neuroscience Information Framework project will create a software infrastructure that can be used to make neuroscience-specific compute and software tools conveniently available to students and investigators.
The project, called "Advanced Biological Informatics Development: Building A Community Resource for Neuroscientists," will offer compute time to neuroscience users through a streamlined process: a simple web portal-based environment for uploading models, retrieving and storing data, specifying the parameters for HPC-based neuronal simulations, and checking the status and completion of jobs. The NSG portal, which is under development, will be available at http://www.nsgportal.org/.
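The workflow the portal automates -- upload a model, choose a simulation package, set run parameters, then poll for completion -- can be sketched as a simple job record. Everything below (class names, fields, states, defaults) is illustrative only, not the NSG's actual implementation:

```python
from dataclasses import dataclass
from enum import Enum


class JobState(Enum):
    QUEUED = "queued"
    RUNNING = "running"
    COMPLETED = "completed"
    FAILED = "failed"


@dataclass
class SimulationJob:
    """One portal-submitted neuronal simulation (hypothetical sketch)."""
    model_archive: str                # uploaded model, e.g. "my_network.zip"
    tool: str = "NEURON"             # simulation package to run
    nodes: int = 1                   # HPC nodes requested
    cores_per_node: int = 8
    wall_time_hours: float = 1.0
    state: JobState = JobState.QUEUED

    def total_cores(self) -> int:
        return self.nodes * self.cores_per_node


# A user would fill these fields through web forms, then poll the state
# until the job reaches COMPLETED and the output can be retrieved.
job = SimulationJob(model_archive="my_network.zip", nodes=4)
print(job.total_cores(), job.state.value)  # 32 queued
```

The point of such a record is that the user never touches a batch scheduler directly; the gateway translates these fields into whatever each HPC resource requires.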
"This gateway will allow neuroscientists to use HPC resources without having to have detailed knowledge about the implementation of the codes on HPC resources, or know all the complexities of how supercomputers work," said SDSC researcher Amit Majumdar, principal investigator for the collaborative award.
The project will enable members of the neuroscience community, including scientists, professors, and students, to use large HPC resources for research and instruction and to run leading simulation and analysis packages for tasks such as computational modeling of cells and large neural networks. This will especially benefit students and researchers who lack access to HPC resources and are at a significant disadvantage compared to the few who have it, removing barriers to progress for many, including historically underrepresented groups.
"Many of these investigators and students would otherwise find it very difficult, if not impossible, to implement and study models that press or exceed the storage and computing speed capabilities that are under their direct control," said Majumdar, who directs the scientific computing applications group at SDSC and is also part of UC San Diego's Department of Radiation Medicine and Applied Sciences.
Subha Sivagnanam and Kenneth Yoshimoto, both SDSC researchers with expertise in software engineering and parallel computing, will help develop the neuroscience gateway. They have been running performance studies of the neuronal simulation software NEURON on SDSC's Triton computer cluster, which has 256 nodes, each with two quad-core 2.4 GHz (gigahertz) Intel Nehalem processors and 24 GB (gigabytes) of memory.
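Performance studies like these typically reduce wall-clock timings to parallel speedup and efficiency. A minimal sketch of those standard metrics follows; the timings are invented for illustration and are not measured Triton or NEURON results:

```python
def speedup_and_efficiency(t_serial: float, t_parallel: float, cores: int):
    """Standard scaling metrics: speedup = T1/Tp, efficiency = speedup/cores."""
    s = t_serial / t_parallel
    return s, s / cores


# Hypothetical timings (seconds) for one simulation on 1 vs. 16 cores.
s, e = speedup_and_efficiency(640.0, 52.0, 16)
print(f"speedup {s:.1f}x, efficiency {e:.0%}")
```

Efficiency well below 100% at higher core counts is what motivates benchmarking each package before exposing it through a gateway.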
"Such a gateway will help advance research by lowering or eliminating the administrative and technical barriers that currently make it difficult for neuroscientists to use HPC resources," said Maryann Martone, principal investigator of the Neuroscience Information Framework (NIF) project, and co-principal investigator for the UC San Diego Neuroscience Gateway award. "This work aligns well with the NIF project which provides a portal into the largest source of neuroscience tools and data on the web and is housed in the Center for Research in Biological Systems (CRBS) at UC San Diego. The gateway will also provide a community forum for neuroscientists to collaborate and share their data."
NIF's Anita Bandrowski, a neuroscientist, and Vadim Astakhov, a parallel computing expert, will help design and test the gateway. Ted Carnevale, principal investigator of the Yale grant, and Michael Hines, both from the Neurobiology Department at the Yale School of Medicine, are also involved in the project. They are the developers of the NEURON software, which will be available via the science gateway. Other neuronal software to be provided by the gateway includes GENESIS3, MOOSE, PyNN, and NEST.
"The past two decades have seen an accelerating growth in the use of computational modeling in neuroscience," said Carnevale. "It has also revealed a research bottleneck in terms of accessing and using cyberinfrastructure and there is a broad consensus that the wider computational neuroscience community needs easier access to complex cyberinfrastructure (CI) and HPC resources."
Specifically, the NSG architecture will transparently distribute user jobs to appropriate HPC resources provided by various NSF supercomputer centers. The HPC systems are part of the agency's Extreme Science and Engineering Discovery Environment (XSEDE), one of the most advanced, powerful, and robust collections of integrated digital resources and services in the world.
The NSG team will also collaborate with developers of neural simulation software to optimally install, test, and benchmark these applications, and allow developers to test new versions before release. Project leaders will also target the promotion of the NSG to underrepresented minority scientists and minority serving institutions through active participation in summer training academies and a network of previously mentored female and minority students, some now employed at minority serving institutions.
Storing Larger Datasets in the SDSC Cloud
Majumdar said that one goal of the Neuroscience Gateway project is to use the SDSC Cloud to store results of large neuronal simulations. "The idea is to move the output files produced on various HPC resources to the SDSC Cloud, where they will be secure and easily accessed and shared," he said. "If the output data size is small, the gateway will allow users to zip up the output files and have them emailed. For larger output files we are considering the associated data transfer, access, and storage issues from the very beginning, and we will be prepared to handle larger simulations and the larger datasets that come with them."
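The size-based delivery decision Majumdar describes can be sketched as follows. The 50 MB threshold, file names, and function name are assumptions for illustration, not NSG's actual policy:

```python
import os
import tempfile
import zipfile

EMAIL_LIMIT_BYTES = 50 * 1024 * 1024  # assumed cutoff for email delivery


def package_output(output_dir: str, archive_path: str) -> str:
    """Zip a job's output directory, then pick a delivery route by size."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(output_dir):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, arcname=os.path.relpath(full, output_dir))
    size = os.path.getsize(archive_path)
    return "email" if size <= EMAIL_LIMIT_BYTES else "cloud-storage"


# Demo with a small synthetic output directory.
with tempfile.TemporaryDirectory() as tmp:
    out = os.path.join(tmp, "results")
    os.makedirs(out)
    with open(os.path.join(out, "spikes.dat"), "w") as f:
        f.write("0.1 1\n0.2 3\n")
    route = package_output(out, os.path.join(tmp, "results.zip"))
    print(route)  # a tiny archive falls under the limit
```

Routing large archives to cloud storage rather than email avoids mail-size limits and gives collaborators a shared, persistent location for the data.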
Growing Popularity of Science Gateways
Science gateways have shown tremendous growth in terms of the number of users, the number of processing hours used on HPC resources by the broader user community, and in the number of published research papers enabled. Gateways can also be readily used for teaching classes, workshops, and tutorials without having to set up codes on HPC resources, or create new accounts for students/attendees.
The theme of XSEDE13, the second annual conference for the XSEDE community, is "Gateway to Discovery"; the conference will be held in San Diego, July 22-25, 2013. "The 2013 conference theme reflects the tremendous impact science gateways have had," said Nancy Wilkins-Diehr, XSEDE13 general chair and director of the Science Gateways program since its inception in the TeraGrid.
For example, the CIPRES Science Gateway (CIPRES stands for CyberInfrastructure for Phylogenetic RESearch), created by SDSC researchers for the phylogenetics research community, has proved extremely popular. The number of users submitting jobs to the CIPRES Science Gateway increased from 132 per month in December 2009 to more than 700 per month in April 2012. An average of 140 new users ran one or more jobs on CIPRES in every month of operation, and the number of repeat users has increased steadily.
"During the past three quarters, CIPRES Gateway users represented 28% or more of all active XSEDE users, indicating that this interface has been successful in enabling access to XSEDE HPC and cyberinfrastructure resources," said Mark Miller, principal investigator of the CIPRES Gateway.
In addition, a recent survey showed that use of the CIPRES Gateway between 2010 and 2012 enabled at least 384 publications, with 25 more in press, illustrating the potential for dramatic impact of gateway projects on scientific progress, according to Miller. Similar metrics of success and impact, including use for teaching classes and tutorials, have been demonstrated by other science gateways, such as nanoHUB, GRIDCHEM, and ROBETTA.
The collaborative three-year Neuroscience Gateway grant between UC San Diego and Yale totals just over $805,000. The NSF award numbers are 1146949 for UC San Diego and 1146830 for Yale University.
Source: University of California San Diego