May 31, 2010
SANTA CLARA, Calif., and HAMBURG, Germany, May 31 -- During the International Supercomputing Conference (ISC), Intel Corporation announced plans to deliver new products based on the Intel Many Integrated Core (MIC) architecture that will create platforms running at trillions of calculations per second, while also retaining the benefits of standard Intel processors.
Targeting high-performance computing segments such as exploration, scientific research, and financial and climate simulation, the first product, codenamed "Knights Corner," will be made on Intel's 22-nanometer (nm) manufacturing process -- using transistor structures as small as 22 billionths of a meter -- and will use Moore's Law to scale to more than 50 Intel processing cores on a single chip. While the vast majority of workloads will still run best on award-winning Intel Xeon processors, Intel MIC architecture will help accelerate select highly parallel applications.
Industry design and development kits codenamed "Knights Ferry" are currently shipping to select developers, and beginning in the second half of 2010, Intel will expand the program to deliver an extensive range of developer tools for Intel MIC architecture. Common Intel software tools and optimization techniques between Intel MIC architecture and Intel Xeon processors will support diverse programming models that will place unprecedented performance in the hands of scientists, researchers and engineers, allowing them to increase their pace of discovery and preserve their existing software investments. The Intel MIC architecture is derived from several Intel projects, including "Larrabee" and such Intel Labs research projects as the Single-chip Cloud Computer.
"The CERN openlab team was able to migrate a complex C++ parallel benchmark to the Intel MIC software development platform in just a few days," said Sverre Jarp, CTO of CERN openlab. "The familiar hardware programming model allowed us to get the software running much faster than expected."
"Intel's Xeon processors, and now our new Intel Many Integrated Core architecture products, will further push the boundaries of science and discovery as Intel accelerates solutions to some of humanity's most challenging problems," said Kirk Skaugen, vice president and general manager of Intel's Data Center Group. "The Intel MIC architecture will extend Intel's leading HPC products and solutions that are already in nearly 82 percent of the world's top supercomputers. Today's investments are indicative of Intel's growing commitment to the global HPC community."
The 35th edition of the TOP500 list, which was announced at ISC, shows that Intel continues to be the platform of choice in high-performance computing, with 408 systems, or nearly 82 percent, powered by Intel processors. More than 90 percent of quad-core-based systems use Intel processors, with the Intel Xeon 5500 series processor nearly doubling its presence with 186 systems. Intel chips also power three systems in the top 10, and four out of five new entrants in the top 30. Seven systems contain the recently announced Intel Xeon 5600 series processor, codenamed "Westmere-EP," and two systems are powered by the new Intel Xeon 7500 series processor, codenamed "Nehalem-EX."
The Intel Xeon processor 5600 series is playing a vital role in the highest-ranked system from China in the history of the TOP500. The No. 2 system, a Dawning TC3600 located at the National Supercomputing Center in Shenzhen (NSCS), reached 1.2 petaflops on the Linpack benchmark. NSCS is a hub for research and innovation in China.
The semi-annual TOP500 list of supercomputers is the work of Hans Meuer of the University of Mannheim, Erich Strohmaier and Horst Simon of the U.S. Department of Energy's National Energy Research Scientific Computing Center, and Jack Dongarra of the University of Tennessee. The complete report is available at www.top500.org.
New Exascale Lab
To meet the growing challenge of running large-scale simulations in the multi-petaflop and exaflop range of computing, Intel, Forschungszentrum Jülich (FZJ) and ParTec will announce a multi-year commitment to create the ExaCluster Laboratory (ECL) at Jülich. The lab will develop key technologies, tools and methods to power multi-petaflop and exaflop machines, focusing on the scalability and resilience of those systems. ECL will become the latest member of Intel Labs Europe, a network of research and innovation centers spanning Europe.
A webcast of Kirk Skaugen's International Supercomputing 2010 keynote presentation will be available here: lecture2go.uni-hamburg.de/live.
Source: Intel Corp.