October 09, 2009
Solutions impact bioenergy, emissions capture
RICHLAND, Wash., Oct. 9 -- The Department of Energy's Pacific Northwest National Laboratory today celebrated the opening of new facilities that will enable discoveries in biological, computational and subsurface science and developments in bioenergy, carbon sequestration and homeland security.
The $75 million facilities represent the first new buildings on PNNL's campus since 1995. The buildings will primarily support research in biological systems science and data-intensive computing for DOE, the Department of Homeland Security, the National Institutes of Health and other organizations.
"These buildings represent the future of the Laboratory -- providing us advanced equipment and tools needed to have an even greater impact," said PNNL Director Mike Kluse. "We have some great scientists, and these facilities will provide them the equipment and tools they need to advance science and deliver science-based solutions."
More than 300 PNNL staff will work in these buildings -- called the Biological Sciences Facility (BSF) and the Computational Sciences Facility (CSF).
"This is an important step in the modernization of the Laboratory and will move scientists out of Cold War-era facilities to buildings that will enable a new generation of discovery and advancement," said Mike Weis, manager of DOE's Pacific Northwest Site Office. PNNL needed to vacate laboratory and office space it was using on the south end of the nearby Hanford Site by 2011 as part of DOE's environmental cleanup efforts there.
In the BSF, scientists will focus on gaining the fundamental understanding of biological systems needed to more effectively use microorganisms for renewable bioenergy and carbon sequestration; prevent contaminants from moving through groundwater; and improve our systems-level understanding of how low-dose radiation and other factors affect human health. BSF will house state-of-the-art analytical equipment and powerful computing capabilities that enable scientists to combine experimental and computational approaches. For example, scientists are studying communities of microbes in hopes of predicting their behavior and then manipulating them to produce a valuable product or process such as renewable bioenergy.
In the CSF, scientists will develop solutions for the growing challenge of data overload -- common to the scientific and national security communities. For example, a single scientific experiment can produce a terabyte of data -- too much for a person to interpret. Intelligence analysts face similar challenges collecting and processing real-time data streams -- from video to audio to text -- that they must analyze to better predict and detect threats. PNNL researchers are leaders in the development of data-intensive computing solutions -- a way to capture, manage, analyze and help users understand massive amounts of data using innovative computing hardware and software technologies. CSF includes 10,000 square feet of raised floor space to accommodate data-intensive and high-performance computing hardware and data storage solutions.
CSF is home to the Center for Adaptive Supercomputing Software, which provides solutions for improving the execution speed of irregular, data-intensive applications like power grid analysis and bioinformatics. PNNL researchers who support the National Visualization and Analytics Center will also work in CSF. NVAC is a Department of Homeland Security program operated by PNNL that is helping local and state emergency responders and government analysts understand and address terrorist threats.
The Cowperwood Company, a real estate development company headquartered in New York City, privately financed the buildings and will lease them to Battelle, which operates PNNL for DOE.
CTL Capital, an investment banking firm based in New York City, structured the financing for these facilities. The Seattle office of San Francisco-based KMD Architects designed the buildings, and D.E. Harvey Builders, based in Houston, served as the general contractor and led construction. Ground was broken in June 2008.
Another new facility -- the Physical Sciences Facility (PSF) -- is being built to replace capabilities that currently reside in buildings set for demolition on the Hanford Site. Construction began on PSF in 2007 and will be complete in 2010. The PSF comprises three main buildings -- Radiation Detection, Materials Science & Technology, and Ultra-Trace -- as well as a high bay for research, a laboratory located 40 feet below the surface, and a radiation portal monitoring test track. These facilities will house about 450 staff who support national security and energy research missions. DOE's Office of Science, the National Nuclear Security Administration and the Department of Homeland Security are funding the 200,000-square-foot, $224 million facility.
About The Cowperwood Company
The Cowperwood Company, headquartered in New York City, designs, builds and leases office and associated laboratory and classified space to the private sector and federal government. The company currently owns and manages more than two million square feet of general office and associated space for the General Services Administration and for private-sector research and engineering companies performing federal contracts.
About KMD Architects
Founded in 1963 as Kaplan McLaughlin Diaz, KMD Architects has eight offices and 190 employees. The firm opened its Seattle office in 1992 after KMD was retained to design the expansion of Harborview Medical Center.
About D.E. Harvey Builders
D.E. Harvey Builders is a full-service general contractor with offices in Houston, Austin and Washington, D.C. The firm provides general contracting, pre-construction, design-build and construction management services.
About Pacific Northwest National Laboratory
Pacific Northwest National Laboratory is a Department of Energy Office of Science national laboratory where interdisciplinary teams advance science and technology and deliver solutions to America's most intractable problems in energy, national security and the environment. PNNL employs 4,250 staff, has a $918 million annual budget, and has been managed by Ohio-based Battelle since the lab's inception in 1965. Follow PNNL on Facebook, LinkedIn and Twitter.
Source: Pacific Northwest National Laboratory