November 15, 2010
NEW ORLEANS, Nov. 14 -- Intel Corporation today announced that its Intel Xeon 5600 series processors, announced earlier this year, are at the heart of the world's most powerful supercomputer, the Tianhe-1A. Located at the National Supercomputing Center in Tianjin, China, Tianhe-1A contains 14,396 Intel processors accompanied by accelerator cards, and has demonstrated groundbreaking performance of 2.57 petaflops (quadrillions of calculations per second).
In partnership with Inspur, a computer manufacturer in China, Intel worked closely with the National Supercomputing Center and its technology partners to achieve this groundbreaking performance.
The 36th edition of the TOP500 list of supercomputers, announced at SC10, Nov. 13-19 in New Orleans, shows that nearly 80 percent of the world's top 500 systems have Intel processors inside. Such machines are increasingly featured in computers designed for geophysics, financial calculations and scientific research focusing on mainstream applications such as improving the safety of football players and enhancing medical imaging. According to the list, Intel chips now power three of the top five systems, including the No. 1 system. Xeon 5600 series processors are a key building block in the No. 3 system (Shenzhen) and the newly listed No. 4 system at the Tokyo Institute of Technology. The new Intel Xeon 7500 series processor is featured in the Bull supernode system at CEA, newly listed at No. 6. Intel's ranking has grown dramatically in ten years, from just six systems on the November 2000 list to 398 systems, or nearly 80 percent, today.
"Our Xeon processor roadmap continues to deliver hugely powerful supercomputers that are helping solve mankind's greatest challenges," said Rajeeb Hazra, general manager of Intel's High Performance Computing organization. "Securing the top position on the Top500 is also a source of great pride for us, and demonstrates the tremendous leaps in performance and versatility that our processors are delivering across a range of compelling workloads."
Additional TOP500 Success
In addition to the Tianjin supercomputer, Intel chips power the No. 6 system on the list, with 17,296 processors. That CEA system, built by Bull, features the largest shared-memory configuration built around the Xeon 7500 series processor, achieving performance in excess of one petaflop.
Another notable supercomputer hails from the Tokyo Institute of Technology. Featuring Xeon 5600 series processors within an NEC/HP system, this No. 4-ranked supercomputer achieved 2.4 petaflops.
The semi-annual TOP500 list of supercomputers is the work of Hans Meuer of the University of Mannheim, Erich Strohmaier and Horst Simon of the U.S. Department of Energy's National Energy Research Scientific Computing Center, and Jack Dongarra of the University of Tennessee. The complete report is available at www.top500.org.
Intel Many Integrated Core (MIC) Demonstrations
During SC10, Intel conducted demonstrations showcasing the real-world capabilities of the recently announced Intel Many Integrated Core (MIC, pronounced "Mike") architecture. These included using the Intel MIC architecture as a co-processor to run financial-derivative Monte Carlo demonstrations that achieved twice the performance of those conducted with prior-generation technologies. The Monte Carlo application for Intel MIC was built from standard C++ code with an Intel MIC-enabled version of the Intel Parallel Studio XE 2011 software development tools, demonstrating how applications written for standard Intel CPUs can scale to future Intel MIC products.
Intel also showcased breakthrough compressed medical imaging, developed with Mayo Clinic, on "Knights Ferry," the first Intel MIC design and development kit. This demonstration used compressed signals to rapidly reconstruct high-quality images, reducing the time a patient has to spend in an MRI scanner.
Intel (NASDAQ: INTC), the world leader in silicon innovation, develops technologies, products and initiatives to continually advance how people work and live. Additional information about Intel is available at newsroom.intel.com and blogs.intel.com.
Source: Intel Corp.