January 24, 2013
RICHLAND, Wash., Jan. 24 – A new supercomputer expected to rank among the world's fastest machines will be ready to run computationally intensive climate and biological simulations, along with other scientific programs, this summer. This computational work will aid research in climate and environmental science, chemical processes, biology-based fuels that can replace fossil fuels, new materials for energy applications and more.
Chosen by a competitive process, Atipa Technologies in Lawrence, Kan., will provide the machine to EMSL, the Department of Energy's Environmental Molecular Sciences Laboratory. EMSL is a national user facility on the campus of DOE's Pacific Northwest National Laboratory that provides experimental and high performance computing capabilities, enabling users to address environmental and energy challenges through molecular-level theory and experiment. It is also home to the new supercomputer's predecessor, Chinook. As a national user facility resource, the new system will be available to scientists everywhere, who will be able to apply on a competitive basis to use it. Currently, about 400 scientists use Chinook.
"We're developing a supercomputer that will aid energy, environment and basic science missions important to DOE," said PNNL computational scientist Bill Shelton, the associate director at EMSL who manages high performance computing. "Enhanced computing power will benefit our users who conduct experiments and want to verify them with modeling. Integrating computational theory with experiment is critical to accelerating scientific discovery."
Funded by DOE's Office of Science, the new $17 million machine will likely peak at 3.4 quadrillion (3.4 million billion) calculations per second, more than 20 times faster than the four-year-old Chinook. The new supercomputer's capacity and speed are expected to place it among the world's 20 fastest machines when it comes online. Peaking at 3.4 petaflops, the new computer will be able to do in one hour what would take a typical laptop more than 20 years.
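The laptop comparison is easy to sanity-check. A rough sketch of the arithmetic follows; the 20-gigaflop laptop rate is an illustrative assumption, not a figure from the article:

```python
# Back-of-the-envelope check of the laptop comparison above.
SUPER_FLOPS = 3.4e15   # 3.4 petaflops, from the article
LAPTOP_FLOPS = 20e9    # assumed rate for a typical 2013 laptop

# Work done by the supercomputer in one hour, expressed in laptop-hours.
laptop_hours = SUPER_FLOPS / LAPTOP_FLOPS      # 170,000 laptop-hours
laptop_years = laptop_hours / (24 * 365)       # convert hours to years

print(f"One supercomputer-hour is about {laptop_years:.1f} laptop-years")
```

With that assumed laptop rate, one hour on the new machine works out to roughly 19 laptop-years, consistent with the "more than 20 years" figure for a somewhat slower laptop.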
Atipa Technologies has been providing high performance computers to DOE and its labs for more than a decade.
"We're excited to have the opportunity to provide the new supercomputer with a theoretical peak performance of 3.38 petaflops and 2.7 petabytes of usable storage. It will be built and deployed by Atipa Technologies in collaboration with Supermicro," said Mike Zheng, president of Atipa Technologies.
As EMSL's flagship high performance computer, the new system will be open to researchers from around the world. The EMSL team designed it for researchers who typically need resources of this scale but don't generally have access to such a powerful machine. This wide availability sets it apart from other supercomputers.
"Its uniqueness is that it will be optimally configured for climate and chemistry simulations and biological analyses," said Shelton.
For example, the new machine will offer added speed for improved climate models. "The new computer provides a wonderful opportunity for climate scientists to get more work done and get each simulation done more quickly," said PNNL climate scientist Phil Rasch. "It is a huge jump in the computing power available to us."
And it will produce more details about how organisms work. "I'm excited because with the amount of data researchers are generating in biology, this supercomputer will open up new avenues for our users," said EMSL biology science lead Scott Baker. "More computing power is like having more pixels in a picture. We'll be able to look at proteins and complex biological interactions more realistically. This will allow us to better understand and control organisms like microbes so that we can develop new renewable fuels."
The design's 196,000 processing units combine Intel processors with Intel Xeon Phi many integrated core (MIC) accelerator cards. The accelerator cards will ratchet up the power: working alongside the conventional processors and memory, they allow up to 120 extra calculations per node to run simultaneously rather than one at a time. (Anyone with a graphics card in a personal computer has taken advantage of a hardware accelerator.)
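How much those 120 simultaneous calculations per node actually buy depends on how much of a given program can run in parallel. A minimal sketch of that relationship using Amdahl's law; the 95% parallel fraction below is an illustrative assumption, not a figure from the article:

```python
def amdahl_speedup(parallel_fraction: float, n_units: int) -> float:
    """Theoretical speedup when `parallel_fraction` of the work can be
    spread evenly across `n_units` simultaneous calculations."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_units)

# Perfectly parallel work scales with all 120 units per node,
# but even a 5% serial portion caps the per-node speedup near 17x.
print(amdahl_speedup(1.0, 120))
print(round(amdahl_speedup(0.95, 120), 1))
```

This is why the article stresses codes "optimally configured" for the machine: the accelerator cards reward highly parallel simulations far more than serial ones.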
The system's 23,000 Intel processors have 184,000 gigabytes (184,000 billion bytes) of memory available, about four times as much memory per processor as other supercomputers. The additional memory will allow scientists to use the processors more efficiently for biology, climate research, chemistry and materials science.
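The memory figures above work out as follows. This is a simple check of the numbers in the article; the "typical" per-processor baseline is inferred from the four-times claim rather than stated directly:

```python
total_memory_gb = 184_000   # from the article
processors = 23_000         # from the article

# Memory available to each processor on the new system.
gb_per_processor = total_memory_gb / processors

# "About four times as much per processor" implies roughly a 2 GB
# per-processor baseline on comparable machines of the era.
typical_gb = gb_per_processor / 4

print(f"{gb_per_processor:.0f} GB per processor (vs ~{typical_gb:.0f} GB typical)")
```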
Atipa will deliver the computer's components by July 2013 and assemble it at EMSL. The EMSL team will spend a few months installing and configuring the system and getting it up to speed. They expect to have it running for national and international researchers in October 2013. In the meantime, EMSL will be sponsoring a naming contest among EMSL users and friends.
The New Supercomputer's Fast Facts:
- Theoretical peak performance: 3.38 petaflops (roughly 3.4 quadrillion calculations per second)
- Usable storage: 2.7 petabytes
- Processing units: about 196,000, built from 23,000 Intel processors plus Intel Xeon Phi MIC accelerator cards
- Memory: 184,000 gigabytes, about four times as much per processor as other supercomputers
- Cost: $17 million, funded by DOE's Office of Science
- Components delivered by July 2013; running for national and international researchers in October 2013
DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit the Office of Science website.
Located in America's heartland in Lawrence, Kansas, Atipa Technologies is the High Performance Computing division of Microtech Computers and has been building Linux-based HPC clusters for well over a decade. A privately-held company, Atipa has developed comprehensive Linux solutions for diverse enterprises, academic research labs and Department of Energy research organizations. Over the years, Atipa has had 14 supercomputers in the Top500 and four supercomputers in the Green500.
Source: Pacific Northwest National Laboratory