August 23, 2010
Aug. 23 -- Michael L. Norman has been named to the position of director of the San Diego Supercomputer Center (SDSC) at the University of California, San Diego. Norman's appointment is effective Sept. 1, 2010.
Norman, a distinguished professor of physics at UC San Diego and a globally recognized computational astrophysicist, had been SDSC's interim director since July 2009 and chief scientific officer of the supercomputer center since June 2008.
"Dr. Norman has demonstrated the vision and leadership the SDSC needs as we enter an era of daunting challenges, accelerating changes, and very promising opportunities," said Senior Vice Chancellor for Academic Affairs Paul W. Drake. "We are confident that SDSC will maintain its preeminence under Mike's leadership."
"Dr. Michael Norman has moved the San Diego Supercomputer Center into broader and deeper collaborations with researchers across the university's entire research enterprise, first as Chief Scientific Officer, then as Interim Director of the center," said Arthur B. Ellis, UC San Diego's vice chancellor for research. "Mike has also been one of the architects of our campus' blueprint for research cyberinfrastructure. As the center's new director, Mike will continue to build partnerships within UC San Diego and the UC system and with research institutions nationally and globally."
In addition to serving as a key resource provider for UC San Diego and the entire UC system, SDSC fields several new programs and systems at the national level. Late last year SDSC won a five-year, $20 million grant from the National Science Foundation (NSF) to build and operate Gordon, the first high-performance supercomputer to employ large amounts of flash memory (solid state drives, or SSDs) to speed solutions now hamstrung by slower spinning-disk technology. Slated for operation in mid-2011, Gordon should rank among the top 30 or so supercomputers in the world, capable of performing latency-bound file reads 10 times faster and more efficiently than any high-performance computing system available today.
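To see why flash matters here, consider a back-of-the-envelope model of a latency-bound workload, in which each small random read pays the device's full access latency. The sketch below uses assumed, order-of-magnitude latencies typical of circa-2010 hardware, not Gordon's actual specifications:

```python
# Rough model of a latency-bound workload: many small random reads,
# where per-access latency (not bandwidth) dominates total time.
# Latency figures are assumed, order-of-magnitude values for
# circa-2010 hardware, not Gordon's published specifications.

HDD_LATENCY_S = 10e-3   # ~10 ms per random read (seek + rotational delay)
SSD_LATENCY_S = 0.1e-3  # ~0.1 ms per random read (no moving parts)

def random_read_time(num_reads: int, latency_s: float) -> float:
    """Total time when every small read pays the full access latency."""
    return num_reads * latency_s

reads = 1_000_000  # e.g., scattered lookups across a large on-disk index
print(f"HDD: {random_read_time(reads, HDD_LATENCY_S):10.1f} s")
print(f"SSD: {random_read_time(reads, SSD_LATENCY_S):10.1f} s")
# ~10,000 s vs ~100 s: a ~100x per-device gap. Real systems narrow it
# with caching and parallel disks, hence the more modest "10 times
# faster" claim at full-system scale.
```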
"Gordon will be ideal for tackling data-intensive problems that don't scale well on today's massively parallel supercomputers, such as the analysis of individual genomes to tailor drugs to specific patients, developing more accurate models to predict the impact of earthquakes or other natural disasters on buildings and other structures, and coupled ocean/atmospheric simulations that offer greater insights into what's happening to the planet's climate," said Norman.
In addition, Gordon will be an allocated resource on NSF's TeraGrid, available to any U.S. researcher through the network's peer-review process. TeraGrid is the nation's largest open-access scientific discovery infrastructure.
Gordon is just one of several new systems that are either already in place or scheduled to go online next year at SDSC, one of the first such centers founded by the National Science Foundation 25 years ago. In April SDSC deployed Dash, a smaller prototype of Gordon that gives prospective users an opportunity to explore Gordon's unique architectural features. Also in operation is the Triton Resource, a new data-intensive system whose unusually large-memory nodes give it some of the most extensive data-analysis power available commercially or at any research institution in the country. Intended primarily for UC and UC San Diego researchers, the Triton Resource includes a Petascale Data Analysis Facility (PDAF) designed for the analysis of very large data sets, and the Triton Compute Cluster (TCC), a scalable cluster designed as a centralized resource and a highly affordable alternative to less energy-efficient 'closet computers.'
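Large-memory nodes pay off when an analysis makes repeated passes over the same data set: once the data fits in RAM, every pass after the first runs at memory speed rather than re-streaming from disk. A minimal sketch of that pattern follows (the file name and the three-pass analysis are hypothetical, not the PDAF's actual workflow):

```python
import numpy as np

# Hypothetical workload: several analysis passes over one large array.
# On a large-memory node the array is loaded once and stays resident;
# a smaller node would have to re-read the file from disk on each pass.

data = np.memmap("observations.bin", dtype=np.float64, mode="r")  # hypothetical file

resident = np.array(data)  # one disk pass; feasible only if RAM is large enough

# Subsequent passes run at memory speed, with no further I/O.
mean = resident.mean()
std = resident.std()
outliers = resident[np.abs(resident - mean) > 3 * std]
print(f"mean={mean:.3f}  std={std:.3f}  outliers={outliers.size}")
```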
Norman, a pioneer in using advanced computational methods to explore the universe and its beginnings, was named a senior fellow of SDSC in 2000. He also directs the Laboratory for Computational Astrophysics, a collaboration between UC San Diego and SDSC that produced the widely used ENZO community code for astrophysics and cosmology simulations on parallel computers.
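Codes in this class typically scale by domain decomposition: each processor evolves its own patch of the simulation grid and trades boundary ("ghost") cells with its neighbors every timestep. The toy mpi4py sketch below illustrates only the pattern; it is not ENZO's actual algorithm, and the diffusion-style update is purely illustrative:

```python
from mpi4py import MPI
import numpy as np

# Illustrative domain decomposition, not ENZO's actual algorithm:
# each rank owns one slab of a 1-D grid and swaps ghost cells with
# its left and right neighbors before every update step.

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 64                 # interior cells owned by this rank
u = np.zeros(n_local + 2)    # +2 ghost cells, one per side
u[1:-1] = rank               # toy initial condition

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(10):
    # Exchange ghost cells with neighboring ranks (no-op at the ends).
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    # Toy diffusion-style update using the freshly exchanged boundaries.
    u[1:-1] = 0.5 * (u[:-2] + u[2:])
```

Run with, for example, `mpirun -n 4 python ghost_exchange.py` (script name hypothetical).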
Norman is the author of more than 250 publications in diverse areas of astrophysical research, including how the first stars in the universe formed and the nature of astrophysical jets. His work has earned him numerous honors, including Germany's prestigious Alexander von Humboldt Research Prize, the IEEE Sidney Fernbach Award, and several HPC Challenge Awards. He is also a Fellow of the American Academy of Arts and Sciences and the American Physical Society. He holds a B.S. in astronomy from Caltech and an M.S. and Ph.D. in engineering and applied science from UC Davis, and completed his postdoctoral work at the Max Planck Institute for Astrophysics in Garching, Germany, in 1984.
From 1986 to 2000, Norman held numerous positions at the University of Illinois at Urbana-Champaign, including professor of astronomy from 1991 to 2000. There he also served as an associate director and senior research scientist of the National Center for Supercomputing Applications (NCSA) under Larry Smarr, now director of the California Institute for Telecommunications and Information Technology (Calit2) at UC San Diego. Before that, Norman was a staff member at Los Alamos National Laboratory from 1984 to 1986.
As an organized research unit of UC San Diego, SDSC is a national leader in creating and providing cyberinfrastructure for data-intensive research. Cyberinfrastructure refers to an accessible and integrated network of computer-based resources and expertise focused on accelerating scientific inquiry and discovery. SDSC is a founding member of TeraGrid.
Source: San Diego Supercomputer Center