December 28, 2012
OAK RIDGE, Tenn., Dec. 28-- The Department of Energy's Oak Ridge National Laboratory regained the lead in high-performance computing, enjoyed record-setting recognition for its research and became a showpiece for renewable energy technology during 2012.
DOE's Office of Science reciprocated with favorable marks in its annual appraisal of managing contractor UT-Battelle. DOE cited the laboratory's operation of its scientific user facilities, its "delivery of impactful science" and a successful workforce restructuring to reduce operating costs.
"The dedicated efforts of our laboratory staff in all phases of science and technology and operational support have resulted in an excellent record of delivering science to the nation in 2012," said ORNL Director Thom Mason. "ORNL will continue to set the pace in research toward a clean and secure energy future."
ORNL's 2012 included achievements in both research and support.
ORNL solidified its standing in world-class scientific computing with the upgrade of the Jaguar supercomputer to the 27-petaflop/s Titan, regaining the top spot on the TOP500 list of the world's supercomputers. Titan also proved to be one of the world's most energy efficient number crunchers, ranking No. 3 on the Green500 list.
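The Green500 ranks machines not by raw speed but by sustained floating-point performance per watt. As a rough illustration of how that metric is computed (using Titan's publicly reported Linpack result of roughly 17.6 petaflop/s and a measured power draw of about 8.2 MW — approximate figures drawn from public rankings, not from this release):

```python
# Sketch of a Green500-style efficiency calculation for Titan.
# The input figures are approximate public numbers, assumed for illustration:
# sustained Linpack (Rmax) performance and measured power during the run.
rmax_pflops = 17.59   # sustained petaflop/s (approximate)
power_mw = 8.209      # megawatts during the benchmark run (approximate)

# Convert to the Green500's customary unit: megaflop/s per watt.
rmax_mflops = rmax_pflops * 1e9   # 1 petaflop/s = 1e9 megaflop/s
power_w = power_mw * 1e6          # 1 MW = 1e6 W

efficiency = rmax_mflops / power_w
print(f"{efficiency:.0f} MFLOPS/W")  # on the order of 2,000 MFLOPS/W
```

With numbers in this range, a machine of Titan's era landed near the top of the efficiency rankings; the same arithmetic applied to earlier TOP500 leaders yields figures several times lower.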
The Mars Curiosity rover successfully landed on the Red Planet and began transmitting historic data back to Earth, thanks in part to ORNL's role in making the radioisotope-fueled generators that power the NASA vehicle and its suite of instruments.
ORNL set a record for R&D 100 Awards, often called the Oscars of science and technology. Ten technologies involving ORNL research were named among R&D Magazine's top 100. The awards reflected the laboratory's strength in advanced materials research, including technologies related to high-temperature superconducting wire, super-tough protective coatings, advanced absorbents, an advanced rolling mill process and a low-cost, lightweight robotic hand based on additive manufacturing and fluid power.
ORNL officially received its new biomass-fueled steam plant from DOE, Johnson Controls and Nexterra. The new plant will generate up to 60,000 pounds of steam per hour from wood chips instead of fossil fuels.
Lab researchers received accolades from the scientific community. ORNL scientist Steven Zinkle was elected to the National Academy of Engineering for his research in materials subjected to extreme environments. Neutron scattering researcher Herb Mook won the prestigious Onnes Prize for superconductivity research.
ORNL's two world-class neutron facilities welcomed their 10,000th user since the addition of the High Flux Isotope Reactor's cold source and the startup of the Spallation Neutron Source in 2006. Neutron scattering experiments at the facilities also resulted in 240 published scientific papers in 2012 (through the beginning of December), nearly double the facilities' publication count of two years earlier.
Demonstrating the sort of leading-edge science that can be achieved with high-performance computing, an ORNL and University of Tennessee team used the Jaguar supercomputer (replaced this fall by Titan) to calculate the number of isotopes allowed by the laws of physics, in research that was published in the journal Nature.
Two longstanding ORNL institutions observed golden anniversaries: The Radiation Safety Information Computation Center and Oak Ridge Isochronous Cyclotron, which is part of the Holifield Radioactive Ion Beam Facility, marked 50 years of service to the scientific community. A few weeks later, the Holifield Facility ended its 50-year run due to budget cuts, but not before a flurry of last-minute physics experiments.
Community outreach activities included ORNL researchers volunteering hours of personal time to mentor area high school teams in the FIRST Robotics competition; Team UT-Battelle's role in building a house to mark the 25th year of Aid to Distressed Families of Appalachian Counties; and a $150,000 UT-Battelle donation toward STEM-related renovations to the Boy Scouts of America's local camp. The lab's United Way campaign raised more than $900,000 in a financially tough year.
ORNL's efforts in partnerships and technology transfer resulted in 203 new invention disclosures, 89 patent applications, 70 granted patents and the execution of 14 new cooperative research and development agreements with industrial partners.
Wildlife around the laboratory sometimes makes the news. The Oak Ridge Reservation experienced a flurry of bear activity in May, and one sighted bruin was eventually trapped by wildlife officers near the High Flux Isotope Reactor. The "subadult" -- i.e., young -- bear was relocated to an unpopulated area outside the Oak Ridge Reservation.
Finally, ORNL enjoyed one of its most injury-free years on record, continuing a trend of working safely.
This year's "report card" consisted of A-minus and B-plus ratings from DOE's Office of Science. UT-Battelle's award fee is approximately $10.5 million of a possible $11.2 million, or 94 percent of the potential fee.
The Fiscal Year 2012 Fee Determination and Annual Performance Appraisal awarded A-minus ratings for mission accomplishment; design, fabrication and construction; science and technology program management; leadership-stewardship; environment, safety and health; and infrastructure; and B-plus ratings for business systems and safeguards and security.
UT-Battelle manages ORNL for DOE's Office of Science. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov/.
Source: Oak Ridge National Laboratory