October 29, 2012
Seattle, WA, Oct. 29 – Global supercomputer leader Cray Inc. today announced the launch of the Company’s new series of production hybrid supercomputers – the Cray XK7 system – in conjunction with today’s debut of the Cray XK7 supercomputer nicknamed “Titan” located at the Department of Energy’s Oak Ridge National Laboratory (ORNL). Titan is capable of more than 20 petaflops of high performance computing (HPC) power and is the world’s most powerful supercomputer for open science.
The Titan system is a 200-cabinet Cray XK7 supercomputer with 18,688 compute nodes, each consisting of a 16-core AMD Opteron 6200 Series processor and an NVIDIA Tesla K20 GPU accelerator. Titan was upgraded from a Cray XT5 supercomputer nicknamed “Jaguar.”
The transformation from Jaguar to Titan is another significant milestone in the collaborative partnership between Cray and ORNL that has produced groundbreaking HPC accomplishments. In 2008, Jaguar set a world record for computer speed with sustained performance of more than one petaflop on two scientific applications, and the system subsequently passed that threshold a total of five times on real-world applications. In 2009, Jaguar claimed the number one spot on the list of the fastest supercomputers in the world. In October 2011, Cray announced it had received a contract to upgrade Jaguar to Titan and equip the system with NVIDIA Tesla 20-series GPUs, and today the Cray XK7 system made its debut.
“Today’s unveiling of the Titan supercomputer is an exciting moment for Oak Ridge and the Department of Energy’s Office of Science, and while the system is currently going through the acceptance process, all of us at Cray share in the enthusiasm that surrounds this amazing tool for open science,” said Peter Ungaro, president and CEO of Cray. “The Titan supercomputer is an incredibly powerful Cray XK7 system combining innovative technologies from companies such as AMD and NVIDIA, surrounded by a tightly-integrated Cray hardware and software infrastructure. With today’s launch of the Cray XK7, we can now offer our customers the same technologies found in one of the most powerful supercomputers in the world.”
The Cray XK7 system features the latest production hybrid supercomputing technologies. By combining the proven, high-performance Gemini interconnect, the new NVIDIA Tesla K20 GPUs and the 16-core AMD Opteron processors, the Cray XK7 system is capable of scaling to more than 50 petaflops of performance.
The Cray XK7 supercomputer also features a unified CPU/GPU programming environment that provides users with validated tools, libraries, compilers and third-party software, fully integrated with the system’s hardware. Combined with the Cray Linux Environment, the result is a hybrid supercomputer that blends scalable hardware, software and networking. Cray XK7 customers will be able to utilize the capabilities of a multi-purpose supercomputer designed for the next generation of many-core HPC applications.
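To make the hybrid CPU/GPU programming model concrete, the sketch below is a minimal, illustrative OpenACC example in C of the directive-based offload style such an environment supports. The saxpy function and array names are hypothetical and not taken from this release; the example assumes an OpenACC-capable compiler, such as those offered in GPU-enabled HPC programming environments.

/* Illustrative sketch only: a simple SAXPY loop offloaded to a node's GPU
 * through OpenACC directives. All names here are hypothetical examples and
 * are not part of the Cray XK7 announcement. */
#include <stdio.h>
#include <stdlib.h>

/* Compute y = a*x + y; the pragma asks an OpenACC-capable compiler to run the
 * loop on the accelerator, copying x in and copying y both ways. */
static void saxpy(int n, float a, const float *restrict x, float *restrict y)
{
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    int n = 1 << 20;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy(n, 3.0f, x, y);           /* offloaded when built with OpenACC support */
    printf("y[0] = %f\n", y[0]);    /* expected: 5.000000 */

    free(x);
    free(y);
    return 0;
}

Because the directives are plain pragmas, a compiler without accelerator support simply ignores them and the same source runs CPU-only, which is one reason directive-based approaches are attractive for hybrid systems of this kind.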
Upgradeable from Cray XT4, Cray XT5, Cray XT6, Cray XE6 or Cray XK6 systems, the Cray XK7 supercomputer is available now. The system can be configured from a single cabinet with tens of compute nodes up to a multi-cabinet system with tens of thousands of compute nodes.
Additional information on the Cray XK7 supercomputer, including a brochure and technical details, can be found on the Cray XK7 system page on the Cray website.
Titan is currently going through the system acceptance process. Cray will not recognize the remaining revenue associated with this system until it has been accepted, and the timing of such acceptance remains uncertain.
About Cray Inc.
As a global leader in supercomputing, Cray provides highly advanced supercomputers and world-class services and support to government, industry and academia. Cray technology is designed to enable scientists and engineers to achieve remarkable breakthroughs by accelerating performance, improving efficiency and extending the capabilities of their most demanding applications. Cray’s Adaptive Supercomputing vision is focused on delivering innovative next-generation products that integrate diverse processing technologies into a unified architecture, allowing customers to surpass today’s limitations and meet the market’s continued demand for realized performance.