June 18, 2012
SEATTLE, WA and HAMBURG, Germany, June 18 -- At the 2012 International Supercomputing Conference in Hamburg, Germany, global supercomputer leader Cray Inc. today announced that the Company’s next-generation supercomputer code-named “Cascade” will be available with Intel Corporation’s new Intel Xeon Phi coprocessors. The Intel Xeon Phi product family is based on Intel’s Many Integrated Core (Intel MIC) architecture, which is designed for highly parallel workloads.
“The Cascade supercomputer will be the result of the most ambitious R&D program Cray has ever embarked on, and our next-generation system is now made even more compelling with today’s exciting announcement that the Cascade system will be available with the Intel Xeon Phi coprocessors,” said Peg Williams, Cray’s senior vice president of high performance computing systems. “Intel’s MIC architecture features a strong balance of performance, programmability and power efficiency, and combining the Intel Xeon Phi coprocessors with the innovative supercomputing technologies we are incorporating into Cascade will allow us to provide our HPC customers with a supercomputer that is unmatched for balance, scalability, reliability and price/performance on real-world applications.”
Cray’s next-generation Cascade supercomputer, which is expected to be available in the first half of 2013, is the next step in Cray’s Adaptive Supercomputing vision. The system will feature a continuing evolution of the Cray Linux Environment, Cray’s HPC-optimized programming environment, and the next-generation system interconnect, code-named “Aries.” Cascade will be able to handle a wide variety of processor types, including the Intel® Xeon® processor E5-2600 product family – a first for Cray’s high-end systems – and now the Intel Xeon Phi coprocessor, further extending the flexibility of Cray supercomputers.
“The Intel Xeon Phi coprocessor is optimized to deliver the highest levels of parallel performance, and the combination of Cray’s next-generation Cascade supercomputer paired with Intel processors and coprocessors will provide a powerful resource for HPC users,” said Rajeeb Hazra, Intel vice president and general manager of the Technical Computing Group. “Cray’s Cascade system will feature highly innovative HPC technologies, and we are excited that our collaboration with Cray will enable researchers and scientists to achieve breakthrough innovations and discoveries.”
A number of leading HPC centers have already signed contracts with Cray to purchase Cascade systems. In October 2010, Cray announced it had signed a contract with the University of Stuttgart to provide a Cascade system to the High Performance Computing Center Stuttgart (HLRS) in Germany. In December 2011, Cray announced it was awarded a contract to provide a Cascade supercomputer to the Academic Center for Computing and Media Studies (ACCMS) at Kyoto University in Kyoto, Japan.
The Cascade supercomputer is made possible in part by Cray’s participation in the Defense Advanced Research Projects Agency’s (DARPA) High Productivity Computing Systems program.
About Cray Inc.
As a global leader in supercomputing, Cray provides highly advanced supercomputers and world-class services and support to government, industry and academia. Cray technology is designed to enable scientists and engineers to achieve remarkable breakthroughs by accelerating performance, improving efficiency and extending the capabilities of their most demanding applications. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to surpass today’s limitations and meet the market’s continued demand for realized performance. Go to www.cray.com for more information.
Source: Cray Inc.