November 17, 2008
NVIDIA Tesla Powers 29th Most Powerful Supercomputer in the World
AUSTIN, Texas, Nov. 17 -- SC08 -- The Tokyo Institute of Technology (Tokyo Tech) today announced a collaboration with NVIDIA to use NVIDIA Tesla GPUs to boost the computational horsepower of its TSUBAME supercomputer. With the addition of 170 Tesla S1070 1U systems, TSUBAME now delivers nearly 170 TFLOPS of theoretical peak performance and 77.48 TFLOPS of measured Linpack performance, placing it once again among the top ranks of the world's Top 500 supercomputers.
"Tokyo Tech is constantly investigating future computing platforms and it had become clear to us that to make the next major leap in performance, TSUBAME had to adopt GPU computing technologies," said Satoshi Matsuoka, division director of the Global Scientific Information and Computing Center at Tokyo Tech. "In testing our key applications, the Tesla GPUs delivered speed-ups that we had never seen before, sometimes even orders of magnitude -- a tremendous competitive boost for our scientists and engineers in reducing their time to solution."
Speaking to the ease of implementation, Matsuoka continued, "The entire upgrade was carried out in 1 week, and the TSUBAME supercomputer remained live throughout. This is an unprecedented feat in top-level supercomputing."
"We are honored to partner with Tokyo Tech -- world famous for their supercomputing expertise and success," said Andy Keane, general manager of the GPU Computing business at NVIDIA. "NVIDIA Tesla breaking into the Top 500 marks a milestone in supercomputing history. The massively parallel GPU is now essential for supercomputing centers worldwide."
The first to achieve a Top 500 ranking with an NVIDIA Tesla-based GPU cluster, Tokyo Tech is one of hundreds of distinguished universities and supercomputing centers that have adopted GPU-based solutions for research. Other leading centers include the National Center for Supercomputing Applications (NCSA) at the University of Illinois, Rice University, the University of Heidelberg, the University of Maryland, the Max Planck Institute, and the University of North Carolina.
The Tesla S1070 1U GPU system is based on the NVIDIA CUDA parallel architecture, which is accessible through an industry-standard C programming environment that allows developers and researchers to tap into the parallel architecture of the GPU more quickly and easily than any other solution shipping today.
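To illustrate what that C programming environment looks like, here is a minimal CUDA sketch of a classic SAXPY operation (y = a*x + y). This is a hypothetical example for illustration only, not drawn from TSUBAME's applications; the kernel and launch parameters are generic, and the host-side data setup is elided.

```cuda
#include <cuda_runtime.h>

// Each GPU thread computes one element: y[i] = a * x[i] + y[i]
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Allocate vectors in GPU device memory
    float *x, *y;
    cudaMalloc(&x, bytes);
    cudaMalloc(&y, bytes);

    // ... fill x and y on the host and copy to the device
    //     with cudaMemcpy before launching ...

    // Launch enough 256-thread blocks to cover all n elements;
    // the hardware schedules them across the GPU's parallel cores.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The point of the model is visible here: the programmer writes ordinary C, and the `__global__` kernel is executed by thousands of lightweight threads in parallel rather than by an explicit loop.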
For more information on NVIDIA Tesla S1070, visit www.nvidia.com/object/tesla_s1070.
NVIDIA is the world leader in visual computing technologies and the inventor of the GPU, a high-performance processor, which generates breathtaking, interactive graphics on workstations, personal computers, game consoles, and mobile devices. NVIDIA serves the entertainment and consumer market with its GeForce graphics products, the professional design and visualization market with its Quadro graphics products, and the high-performance computing market with its Tesla computing solutions products. NVIDIA is headquartered in Santa Clara, Calif., and has offices throughout Asia, Europe, and the Americas. For more information, visit www.nvidia.com.
Source: NVIDIA Corp.