August 14, 2008
System will process up to 360 trillion calculations per second, store 60 times more data than the Library of Congress Web archive, and be housed in an ultra energy-efficient data centre
TORONTO, Aug. 14 -- The University of Toronto's SciNet Consortium and IBM today announced an agreement to build Canada's most powerful and energy-efficient supercomputer.
The system will strengthen the competitive position of the consortium, which includes the University of Toronto and its associated research hospitals, in globally important research projects. These include ground-breaking research in aerospace, astrophysics, bioinformatics, chemical physics, climate change prediction, medical imaging and the global ATLAS project, which is investigating the forces that govern the universe.
Capable of performing 360 trillion calculations per second, the supercomputer will pioneer an innovative hybrid design containing two systems that can work together or independently, connected to a massive five petabyte storage complex. Because it is a hybrid using IBM's highly efficient iDataPlex system, as well as IBM's advanced POWER6 architecture, the machine is extremely flexible, capable of running a wide range of software at a high level of performance.
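A quick sanity check connects the headline storage figures: if five petabytes is 60 times the Library of Congress Web archive, the archive works out to roughly 83 terabytes. The minimal Python sketch below assumes decimal units (1 PB = 1,000 TB), which the release does not specify.

    # Back-of-envelope check of the storage comparison in the release.
    # Assumption: "petabyte" is decimal, i.e. 1 PB = 1,000 TB.
    storage_pb = 5    # SciNet storage complex, per the release
    ratio = 60        # "60 times more data than the Library of Congress Web archive"
    loc_archive_tb = storage_pb * 1000 / ratio
    print(f"Implied Library of Congress Web archive size: ~{loc_archive_tb:.0f} TB")
    # -> ~83 TB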
As a premier academic research system, the machine is expected to rank among the 20 fastest supercomputers in the world, with peak performance 30 times that of Canada's current largest research system. It will also be the second largest system ever built on a university campus and the largest supercomputer outside the United States.
"The University of Toronto has partnered with IBM to become one of the world's premier computational research institutions -- a collaboration that will attract researchers from around the world," said Dr. Richard Peltier, scientific director of SciNet and director of the Centre for Global Change Science.
A physicist whose interests are focused on planetary physics and climate change prediction, Dr. Peltier conducts research on the impacts of greenhouse gas-induced global warming that will be greatly enhanced by this system. The SciNet facility will be one of the world's most advanced supercomputers for analyzing high-resolution global models to predict future risks, such as the accelerating decrease in Arctic sea ice. An immediate project will be the construction of regional climate change predictions for the Province of Ontario and the Great Lakes watershed region.
Another area of research for this system will be exploring the modern scientific mystery of why matter has mass and what constitutes the mass of the universe. Beginning in September, the Large Hadron Collider in Geneva, the most powerful particle accelerator ever built, will produce vast quantities of data, which scientists hope will begin to unlock these mysteries. SciNet's computing power and storage capacity will be a significant contributor to the data analysis.
"SciNet will have one of the best facilities in the world that will allow Canadian physicists to participate in the adventure of the Large Hadron Collider," said Dr. Pierre Savard, a member of the Canadian group working at CERN, Geneva. "This research may change our view of matter and the universe."
This facility will involve the largest implementation to date of IBM's iDataPlex system, which holds twice as many processors per rack as standard systems and is entirely water cooled. More than 4,000 servers will be linked together in this multi-platform solution, including one of the world's largest POWER6 clusters and Intel x86-based clusters. The supercomputer will also be among the first systems to use Intel's forthcoming Nehalem processor family, expected in early 2009.
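For readers curious how the 360-teraflop figure squares with the node counts above, the hypothetical back-of-envelope estimate below shows one way the x86 portion could account for most of the peak. The node split, core count, clock speed and FLOPs-per-cycle values are illustrative assumptions typical of Nehalem-class hardware, not figures from the release.

    # Hypothetical peak-performance estimate for the x86 portion of the system.
    # All per-node figures below are assumptions, not numbers from the release.
    nodes = 3800              # assumed x86 share of the ">4,000 servers"
    cores_per_node = 8        # assumed dual-socket, quad-core nodes
    clock_hz = 2.5e9          # assumed 2.5 GHz clock
    flops_per_cycle = 4       # typical double-precision SSE throughput per core
    peak_tflops = nodes * cores_per_node * clock_hz * flops_per_cycle / 1e12
    print(f"Estimated x86 peak: ~{peak_tflops:.0f} TFLOPS")
    # -> ~304 TFLOPS; the POWER6 cluster would supply the remainder
    # toward the quoted 360 trillion calculations per second.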
"A system this complex could only be designed by bringing together the best minds from the University of Toronto and IBM," said Chris Pratt, strategic initiatives executive at IBM Canada. "This is a tremendous example of public and private collaboration that will benefit the Canadian research community for many years to come."
Funding has been provided by the Canada Foundation for Innovation's National Platforms Fund, in partnership with the Province of Ontario and the University of Toronto.
Construction of this extremely energy-efficient data centre will begin immediately at a facility just north of Toronto. Installation of the system will begin in the fall, with several milestones throughout the winter. It is anticipated that both of the main computing systems will be fully operational by summer 2009.
For more information about IBM, visit www.ibm.com.