New Cray OS Brings ISVs in for a Soft Landing
Cray has never made a big deal about the custom Linux operating system it packages with its XT supercomputing line. In general, companies don’t like to tout proprietary OS environments since they tend to lock custom codes in and third-party ISV applications out. But the third generation Cray Linux Environment (CLE3) that the company announced on Wednesday is designed to make elite supercomputing an ISV-friendly experience.
Besides adding compatibility with off-the-shelf ISV codes, which we’ll get to in a moment, the newly-minted Cray OS contains a number of other enhancements. In the performance realm, CLE3 increases overall scalability to greater than 500,000 cores (up from 200,000 in CLE2), adds Lustre 1.8 support, and includes some advanced scheduler features. Cray also added a feature called “core specialization,” which allows the user to pin a single core on each node to the OS and devote the remainder to application code. According to Cray, on some types of codes, this can boost performance by 10 to 20 percent. CLE3 also brings with it some additional reliability features, including NodeKARE, a diagnostic capability that makes sure jobs are running on healthy nodes.
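At its core, “core specialization” is a CPU affinity policy: OS housekeeping is confined to one core so that application ranks on the remaining cores run with less jitter. A minimal sketch of the idea using standard Linux affinity calls follows; this illustrates the concept only, not Cray’s kernel-level implementation, and the choice of core 0 as the OS core is an assumption.

```python
# Sketch of the "core specialization" idea: reserve one core per node for
# OS noise and daemons, and pin application work to the remaining cores.
# Illustrative only -- Cray's actual mechanism lives inside CLE3.
import os

OS_CORE = 0  # hypothetical: the core left to the operating system

def app_cores(total_cores):
    """Return the set of cores devoted to application code."""
    if total_cores < 2:
        raise ValueError("core specialization needs at least 2 cores")
    return set(range(total_cores)) - {OS_CORE}

if __name__ == "__main__":
    ncores = os.cpu_count() or 1
    cores = app_cores(max(ncores, 2))
    if hasattr(os, "sched_setaffinity") and ncores > 1:
        # Pin this process (and any children it forks) away from the OS core.
        os.sched_setaffinity(0, cores)
    print(sorted(cores))
```

The win comes from determinism: tightly coupled MPI codes run at the pace of their slowest rank, so keeping OS interrupts off the compute cores removes a source of stragglers.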
But the biggest new feature added to CLE3 is compatibility with standard HPC codes from independent software vendors (ISVs). This new capability has the potential to open up a much broader market for Cray’s flagship XT product line, and further blur the line between proprietary supercomputers and traditional HPC clusters.
Cray has had an on-again, off-again relationship with HPC software vendors. Many of the established ISVs in this space grew up alongside Cray Research, and software from companies like CEI, LSTC, SIMULIA, and CD-adapco actually ran on the original Cray Research machines. Over time, these vendors migrated to standard x86 Linux and Windows systems, which became their primary platforms, and dropped products that required customized solutions for supercomputers. Cray, for its part, left most of the commercial ISVs behind as it focused on high-end HPC and custom applications.
But a couple of years ago, Cray decided it was going to bring the ISVs back into its top-of-the-line supers. The company already had the major pieces in place — an x86 platform in the Opteron-based XT architecture and a SUSE Linux-based OS in CLE. The pieces didn’t quite fit because Cray used an MPI implementation targeted to its proprietary SeaStar system interconnect, while the ISVs employ MPI libraries built atop a standard communication protocol — either TCP/IP or the OpenFabrics Enterprise Distribution (OFED). The only way commercial software (or any software for that matter) would run on an XT machine was by compiling the application code with the Cray libraries. In fact, CD-adapco and LSTC went to the trouble of doing exactly that and ported some of their codes to run on Cray supercomputers. In general, though, ISVs would rather not be bothered to maintain and support multiple distributions of their software for low-volume platforms.
In the new Linux distribution, Cray has added a TCP/IP layer on top of its SeaStar library to form a bridge to standard Linux codes. That means vanilla ISV applications should work out of the box, assuming the software licensing is set up properly. According to Barry Bolding, vice president of Cray’s Scalable Systems division, the company has been busy testing codes from all the major vendors — ANSYS, The MathWorks, SIMULIA, CEI, CD-adapco, LSTC, Metacomp Technologies, Accelrys — and has yet to uncover any incompatibilities. He says that from the application’s point of view, the Cray system software environment now looks like any standard x86 Linux cluster.
Access to the TCP/IP interface is only available in what Cray calls “Cluster Compatibility Mode” (CCM), which represents the ISV-friendly part of CLE3. The default environment is Cray’s “standard” runtime, which they now refer to as “Extreme Scalability Mode.” The idea is that as ISV-derived jobs are queued up for execution, the appropriate nodes are loaded with CCM, and then subsequently reprovisioned with ESM after the application completes. The OS footprint for the two modes is nearly identical, with the CCM version about 45 MB larger than its ESM sibling.
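The queue-driven switch between the two runtimes can be pictured as a provisioning step wrapped around each job’s lifetime. The sketch below is hypothetical; the `Node` class and mode strings are invented for illustration, and Cray’s actual mechanism is internal to CLE3 and its scheduler integration.

```python
# Hypothetical sketch of CCM/ESM node reprovisioning around a job's lifetime.
# The Node class and run_job helper are invented for illustration.

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.mode = "ESM"  # Extreme Scalability Mode is the default runtime

def run_job(nodes, job, needs_ccm):
    # Load Cluster Compatibility Mode only for ISV (standard Linux) jobs.
    if needs_ccm:
        for n in nodes:
            n.mode = "CCM"
    try:
        job(nodes)
    finally:
        # Reprovision back to the default ESM runtime once the job completes.
        for n in nodes:
            n.mode = "ESM"

nodes = [Node(i) for i in range(4)]
seen = []
run_job(nodes, lambda ns: seen.extend(n.mode for n in ns), needs_ccm=True)
print(seen, [n.mode for n in nodes])
```

The key design point is that the mode switch is per-job and per-node, so native ESM workloads and CCM-hosted ISV jobs can share the same machine without interfering with each other.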
In the initial version of CLE3, the size of a CCM job is limited to 2,048 cores. Bolding says that’s because they don’t think they’ll be able to achieve any more scalability than that with the TCP/IP implementation. Of course, multiple CCM apps could be running simultaneously. So, for example, an Abaqus CAE job could be running on 100 nodes, a CEI EnSight one on another 50, MATLAB on 20 more, and so on.
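To make the 2,048-core ceiling concrete, the per-job node budget is simple division. The 24-core node count used here is an assumption for illustration (XT6 nodes shipped with two 12-core Opterons); the limit itself comes from the article.

```python
# Back-of-the-envelope for the CCM job-size cap. The 24 cores/node figure
# is an assumption for illustration; only the 2,048-core limit is given.
CCM_CORE_LIMIT = 2048

def max_ccm_nodes(cores_per_node):
    """Whole nodes a single CCM job can occupy under the core limit."""
    return CCM_CORE_LIMIT // cores_per_node

print(max_ccm_nodes(24))  # 85 whole 24-core nodes per CCM job
```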
Bolding claims that the performance they’ve achieved from TCP/IP on top of SeaStar is close to what you could get out of an InfiniBand-based cluster. The upcoming “Baker” system will incorporate the faster “Gemini” interconnect, so they expect a significant performance gain just from the new hardware. In addition, next year Cray plans to offer an OFED communication stack on top of its interconnect, which should boost performance even further. Bolding is confident the Gemini-OFED combo will outrun InfiniBand in any benchmark.
With the initial CLE3 release, the company can now target customers who need the XT for their own scalable custom codes, but who would have passed on the system because they also wanted to run ISV codes alongside them. How big a market that represents is anyone’s guess, but Cray will soon find out. Next year, with the optimized Gemini/OFED communication, the company can sell Baker systems to customers who run only ISV applications but are willing to pay a premium for better performance.
CLE3 will be released on the various XT platforms in stages. The initial version will be included with the currently-shipping XT6 and XT6m machines, with plans to make it available for the XT5 and XT5m systems sometime later in the year. CLE3 will also be packaged with the Baker supers from the start. Those systems are expected to start shipping in the third quarter of 2010.