May 31, 2010
SANTA CLARA, Calif., and HAMBURG, Germany, May 31 -- During the International Supercomputing Conference (ISC), Intel Corporation announced plans to deliver new products based on the Intel Many Integrated Core (MIC) architecture that will create platforms running at trillions of calculations per second, while also retaining the benefits of standard Intel processors.
Targeting high-performance computing segments such as exploration, scientific research and financial or climate simulation, the first product, codenamed "Knights Corner," will be made on Intel's 22-nanometer (nm) manufacturing process -- using transistor structures as small as 22 billionths of a meter -- and will apply Moore's Law to scale to more than 50 Intel processing cores on a single chip. While the vast majority of workloads will still run best on award-winning Intel Xeon processors, the Intel MIC architecture will help accelerate select highly parallel applications.
Industry design and development kits codenamed "Knights Ferry" are currently shipping to select developers, and beginning in the second half of 2010, Intel will expand the program to deliver an extensive range of developer tools for Intel MIC architecture. Common Intel software tools and optimization techniques between Intel MIC architecture and Intel Xeon processors will support diverse programming models that will place unprecedented performance in the hands of scientists, researchers and engineers, allowing them to increase their pace of discovery and preserve their existing software investments. The Intel MIC architecture is derived from several Intel projects, including "Larrabee" and such Intel Labs research projects as the Single-chip Cloud Computer.
"The CERN openlab team was able to migrate a complex C++ parallel benchmark to the Intel MIC software development platform in just a few days," said Sverre Jarp, CTO of CERN openlab. "The familiar hardware programming model allowed us to get the software running much faster than expected."
"Intel's Xeon processors, and now our new Intel Many Integrated Core architecture products, will further push the boundaries of science and discovery as Intel accelerates solutions to some of humanity's most challenging problems," said Kirk Skaugen, vice president and general manager of Intel's Data Center Group. "The Intel MIC architecture will extend Intel's leading HPC products and solutions that are already in nearly 82 percent of the world's top supercomputers. Today's investments are indicative of Intel's growing commitment to the global HPC community."
The 35th edition of the TOP500 list, which was announced at ISC, shows that Intel continues to be the platform of choice in high-performance computing, with 408 systems, or nearly 82 percent, powered by Intel processors. More than 90 percent of quad-core-based systems use Intel processors, with the Intel Xeon 5500 series processor nearly doubling its presence with 186 systems. Intel chips also power three systems in the top 10, and four out of five new entrants in the top 30. Seven systems contain the recently announced Intel Xeon 5600 series processor, codenamed "Westmere-EP," and two systems are powered by the new Intel Xeon 7500 series processor, codenamed "Nehalem-EX."
The Intel Xeon processor 5600 series is playing a vital role in the highest-ranked Chinese system in TOP500 history. The No. 2 system, a Dawning TC3600 located at the National Supercomputing Center in Shenzhen (NSCS), reached 1.2 petaflops on the Linpack benchmark. NSCS is a hub for research and innovation in China.
The semi-annual TOP500 list of supercomputers is the work of Hans Meuer of the University of Mannheim, Erich Strohmaier and Horst Simon of the U.S. Department of Energy's National Energy Research Scientific Computing Center, and Jack Dongarra of the University of Tennessee. The complete report is available at www.top500.org.
New Exascale Lab
To meet the growing challenge of running large-scale simulations in the multi-petaflops and exaflops range, Intel, Forschungszentrum Jülich (FZJ) and ParTec will announce a multi-year commitment to create the ExaCluster Laboratory (ECL) at Jülich. The lab will develop key technologies, tools and methods to power multi-petaflops and exaflops machines, focusing on the scalability and resilience of those systems. The ECL will become the latest member of Intel Labs Europe, a network of research and innovation centers spanning Europe.
A webcast of Kirk Skaugen's International Supercomputing 2010 keynote presentation will be available at lecture2go.uni-hamburg.de/live.
Source: Intel Corp.