November 22, 2010
Nov. 19 -- Renowned supercomputing expert Pete Beckman has been named director of a newly created Exascale Technology and Computing Institute (ETCi) at the U.S. Department of Energy's (DOE) Argonne National Laboratory. Working with scientists and industrial partners from around the world, the ETCi will focus on developing exascale computing to extend scientific discovery and solve critical science and engineering problems.
Exascale computing represents the next generation of supercomputers, systems that will be 1,000 times more powerful than the Tianhe-1A -- a supercomputer in China that was recently named the fastest in the world. Currently, computing speeds are measured in petaflops, representing a quadrillion operations per second. Exascale machines will be measured in exaflops, which are the equivalent of a quintillion, or one million trillion floating point operations per second.
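To put those units in perspective, the arithmetic is simple; the short sketch below (illustrative only, not part of the announcement) spells it out in Python.

    # Illustrative unit arithmetic for the performance scales described above.
    PETAFLOP = 10**15  # one quadrillion floating point operations per second
    EXAFLOP = 10**18   # one quintillion (one million trillion) operations per second

    # An exaflop machine performs 1,000 times more operations per second
    # than a one-petaflop machine.
    print(EXAFLOP // PETAFLOP)  # prints 1000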
These exascale supercomputers will be powerful enough to produce models of phenomena that are not possible with today's tools. They will be capable of generating simulations ranging from large-scale models of worldwide climate change down to phenomena that are much smaller in scale but extraordinarily complex, such as the functions that take place within a single human cell.
"Reaching exascale will require exciting new technologies and novel approaches to hardware and software development," said Rick Stevens, Argonne's associate laboratory director for computing, environment, and life sciences. "Pete's proven leadership and ability to deliver results will be key to harnessing the power of exacale and leading the community to produce unprecedented opportunities for scientific discovery and technical innovations to power American industry."
Prior to this new assignment, Beckman served as director of the Argonne Leadership Computing Facility (ALCF), a world-leading high-performance computing center located at the Argonne site outside of Chicago.
Argonne has a long history of achievement in high-performance computing, from developing advanced computational methods and open source software used worldwide by thousands of scientists to deploying the world's largest platforms for the national scientific community, such as Intrepid, an IBM Blue Gene/P supercomputer, and Magellan, a cloud computing platform for scientists.
But making exascale computing possible will require a concerted effort by the entire scientific computing community. Beckman and colleagues from other DOE laboratories and six universities were recently awarded funds to construct a plan for creating an Exascale Software Center that would develop the software for future exascale platforms.
"Supercomputing architectures are rapidly changing," said Beckman. "New technology will necessitate transforming system software and applications to enable new scientific discovery at extreme scales. By using principles of co-design, computer scientists and applied mathematicians, industrial partners, and the scientists using today's supercomputers can work together to make exascale computing a reality."
Over the next 10 years, the community will work together to simultaneously address a number of daunting technical challenges, such as developing ultra-low power designs, 3-D chip configurations, massively parallel programming models, silicon photonics and hybrid multicore architectures.
"I am honored and excited to be a part of such an important initiative," said Beckman. "Exascale computing will be critical in maintaining American competitiveness and our global leadership in high-performance computing. It promises huge benefits in energy, environment, health and national security."
Beckman is also co-chair of the International Exascale Software Project (IESP), co-funded by the National Science Foundation and DOE. Over the last two years, the IESP has organized the world's top scientists to construct a roadmap for exascale software. The roadmap will be published in the January 2011 issue of the International Journal of High Performance Computing Applications.
During the past 20 years, Beckman has designed and built software and architectures for large-scale parallel and distributed computing systems. He joined Argonne in 2002 as director of engineering and chief architect for the TeraGrid, and in 2008 was named director of the ALCF. Under Beckman's guidance, the ALCF successfully deployed the IBM Blue Gene/P system, one of the world's fastest supercomputers, ahead of schedule.
Beckman has also worked in industry, founding a research laboratory in 2000 in Santa Fe, N.M., sponsored by Turbolinux Inc., which developed the world's first dynamic provisioning system for large clusters and datacenters. In 1997, Beckman joined the Advanced Computing Laboratory (ACL) at Los Alamos National Laboratory, where he founded the ACL's Linux cluster team and was instrumental in catalyzing the high-performance Linux computing cluster community. He received a Ph.D. degree in computer science from Indiana University where he helped create the Extreme Computing Laboratory.
About Argonne National Laboratory
Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation's first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America's scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy's Office of Science.
Source: Argonne National Laboratory