October 06, 2009
KNOXVILLE, Tenn., Oct. 6 -- The University of Tennessee's supercomputer, Kraken, has broken a major barrier to become the world's first academic supercomputer to enter the petascale, performing more than one thousand trillion operations per second, a landmark achievement.
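For scale, the petascale threshold works out simply: one petaflop is 10^15 calculations per second, that is, 1,000 x 10^12, or one thousand trillion, which is why crossing it is counted in quadrillions of operations every second.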
Kraken is only the fourth supercomputer of any kind to break the barrier, and that computing power already is being applied to high-level science that is changing the way researchers study everything from the innermost workings of our cells to giant astrophysics questions that shed light on the origins of the universe.
Along the way, the computer, funded by a $65 million grant to UT Knoxville from the National Science Foundation, has created more than 25 full-time jobs and helped place Tennessee at the center of big science. Kraken first entered operation in late 2007, and has expanded through a series of planned upgrades that have made it progressively faster and more powerful. The computer's most recent upgrade was officially completed today.
"This milestone is an example of the University of Tennessee's growing achievements in the area of supercomputing. It helps us attract better students and faculty, and thus raises the profile of our university and the state of Tennessee," said Interim UT President Jan Simek.
More than 250 projects are either under way or have already been completed since the computer first came online, and a significant number of them are being undertaken by Tennessee researchers. In fact, UT Knoxville faculty have conducted 33 projects on the Kraken system -- more than the faculty of any other university.
Kraken's power makes it possible for scientists to create complex models to simulate processes in the real world in more understandable ways. Those models can be used to address issues from health and medicine to alternative energy.
Among the projects conducted by UT Knoxville scientists on Kraken are: enhancing the efficiency of biofuels in both production and use; developing more effective climate and weather models to address issues from severe weather to climate change; creating novel materials with a wide variety of uses; and analyzing disorders that throw the heart out of rhythm.
"Having Kraken has made UT Knoxville a magnet for great faculty and world-leading research," said UT Knoxville Chancellor Jimmy G. Cheek. "Being the first academic computer this powerful means that we will continue not only to enhance our reputation as a research institution, but also that we will continue to take the lead in making life better for people both in Tennessee and around the world."
Kraken is made up of almost 100,000 computing cores, and it gets its power by making those cores work together in the most effective way possible on any given problem. One way to visualize how Kraken works is to imagine a completely full Neyland Stadium in which everyone -- fans, players, coaches and staff -- is working on individual laptops on the same problem. Kraken harnesses that combined power to tackle major scientific questions.
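That stadium picture maps closely onto how machines like Kraken are actually programmed. As a rough, hypothetical illustration (not Kraken's production code), the short C program below uses MPI, the message-passing interface standard on Cray XT5 systems, to hand every core its own slice of one shared problem and then combine the partial answers:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which core am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many cores in total? */

    /* Toy workload standing in for real science: each core sums
       its own slice of a global series. */
    const long chunk = 1000000;
    double partial = 0.0;
    for (long i = 0; i < chunk; i++) {
        long k = (long)rank * chunk + i + 1;  /* this core's slice of the index space */
        partial += 1.0 / (double)k;
    }

    /* Combine every core's partial sum into one answer on core 0. */
    double total = 0.0;
    MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("%d cores combined their partial sums: %f\n", size, total);

    MPI_Finalize();
    return 0;
}

Built with mpicc and launched with, say, mpirun -n 8, the same program runs unchanged whether it is given eight cores or nearly 100,000; only the number of laptops in the stadium changes.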
"At over a petaflop of peak computing power, and the ability to routinely run full machine jobs, Kraken will dominate large-scale NSF computing in the near future," said Phil Andrews, director of the National Institute for Computational Science, which manages Kraken. "Its unprecedented computational capability and total available memory will allow academic users to treat problems that were previously inaccessible."
Beyond its computing power, Kraken, a Cray XT5 computer, also has a massive amount of memory to store the information used in scientists' large-scale projects. With 129 terabytes of memory, Kraken can store the equivalent of more than 10 million phonebooks.
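Taken at face value, the comparison implies about 129 TB / 10 million ≈ 13 megabytes per phonebook, roughly the size of a large city directory stored as plain text.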
As the first computer managed by a university to pass this milestone, Kraken puts UT in front of other major computing centers across the country, while enhancing the national research effort through Kraken's role in NSF's nationwide network of computers called TeraGrid, the largest computational platform for open scientific research. Kraken is housed in the computing facilities at Oak Ridge National Laboratory, which are also home to another petascale computer, called Jaguar.
Source: University of Tennessee