The Weekly Top Five

By Tiffany Trader

February 24, 2011

The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover Cray’s first XMT-2 supercomputer order, University of Delaware researchers’ work on extreme-scale architectures, AMD’s OpenCL University Kit, Platform’s Grid Engine migration program, and PGI’s 2011 product refresh.

CSCS First to Order Cray XMT-2 Supercomputer

Cray has received its first order for a supercomputer based on its next-generation XMT architecture. The contract was awarded by the Swiss National Supercomputing Centre (CSCS) in Manno, Switzerland, and the announcement was timed to coincide with a CSCS-hosted workshop focused on large-scale data analysis. The timing is fitting, since that is exactly the kind of workload CSCS has planned for the system.

CSCS is no stranger to Cray systems. The organization was the recipient of the first-ever Cray XE6 system and is also home to a Cray XT5 supercomputer, referred to as “Rosa.” The upcoming addition, expected to arrive later this year, will be part of a new project at CSCS called EUREKA, which matches Swiss scientists with dedicated resources for large-scale data analysis services. According to the release, “the proposed facility will be used for large-scale analysis of unstructured data and data mining, and is designed for parallel applications that are dynamically changing, require random access to shared memory and typically do not run well on conventional systems.”

The Cray XMT supercomputer features a massively multithreaded architecture built to support “data-driven problems that exist in unrelated and diverse data sets.” Each processor can handle up to 128 concurrent threads, and the system can scale from 16 processors to many thousands of processors.
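To give a sense of the programming style the XMT targets, here is a minimal sketch of a fine-grained parallel loop in C. The “#pragma mta” directive and the int_fetch_add atomic follow Cray’s published XMT C compiler conventions, but the graph kernel itself is a hypothetical illustration, not CSCS code:

    /* Sketch of an XMT-style loop: counting in-degrees over an edge list.
       The XMT hides memory latency by switching among up to 128 hardware
       threads per processor, so irregular, memory-bound loops like this
       can still keep the machine busy. */
    void count_in_degrees(const int *edge_dst, int num_edges, int *in_degree)
    {
        int i;
        /* Tell the XMT compiler the iterations may run in parallel;
           int_fetch_add performs each increment atomically. */
        #pragma mta assert parallel
        for (i = 0; i < num_edges; i++)
            int_fetch_add(&in_degree[edge_dst[i]], 1);
    }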

University of Delaware Researchers Hope to Redesign Supercomputer

Guang Gao and a team of researchers at the University of Delaware are working to achieve breakthroughs in supercomputing that they hope will lead to a new generation of systems. The group is focused on improving the speed, efficiency and computational capacity of extreme-scale systems.

Gao, a distinguished professor of Electrical and Computer Engineering, is an expert in computer architecture and parallel systems. He and his team are taking part in a research and development initiative put forth by the Defense Advanced Research Projects Agency (DARPA) “to create an innovative, revolutionary new generation of computing systems” under DARPA’s recently announced Ubiquitous High Performance Computing (UHPC) program. The University of Delaware researchers are members of the Intel Corporation UHPC team, which is focused on creating the next generation of hardware and software technologies for extreme-scale computing systems. Other members of the Intel team are based at the University of Illinois at Urbana-Champaign, the University of California at San Diego, Reservoir Labs Inc. and E.T. International, Inc. (ETI).

Gao comments on the significance of the undertaking: “This is a very important event for the nation. This project will develop a supercomputer that puts the United States ahead of our competitors. But with that comes a lot of responsibility.”

The project participants understand the need to develop a different kind of architecture, one that enables a true breakthrough in parallelism instead of simply stringing more and more cores together. To that end, the announcement states that the UHPC program recognizes that “a new model of computation or an execution model must be developed that enables the programmer to perceive the system as a unified and naturally parallel computer system, not as a collection of microprocessors and an interconnection network.”

Such a redesign is seen as paramount to the nation’s economic and military competitiveness, a point that DARPA, a Department of Defense agency, understands very well. The “radically new” architecture is expected to allow applications to perform 100 to 1,000 times better than current models. Another goal is to make parallel software easier to design and develop.

Prototypes of these UHPC systems are scheduled to be ready by 2018.

AMD Launches OpenCL University Kit

This week AMD introduced the OpenCL University Kit to assist universities in teaching a semester course in OpenCL programming. OpenCL (Open Computing Language) is an open standard for parallel programming of heterogeneous platforms including GPUs, multicore CPUs and other processors.

From the announcement:

This effort underscores AMD’s commitment to the educational community, which currently includes a number of strategic research initiatives, to enable the next generation of software developers and programmers with the knowledge needed to lead the era of heterogeneous computing. OpenCL, the only non-proprietary industry standard available today for true heterogeneous computing, helps developers to harness the full compute power of both the CPU and GPU to create innovative applications for vivid computing experiences.

The University Kit includes a 13-lecture series complete with instructor and speaker notes and code examples. Course participants need not already be proficient in OpenCL programming; however, a basic knowledge of C/C++ programming is recommended. Students will need a C/C++ compiler and an OpenCL implementation, such as the AMD APP SDK, to complete the exercises. Additional information, including a list of educational institutions now offering courses in OpenCL programming, is available from AMD.
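For readers new to the standard, below is a minimal sketch of the host-plus-kernel structure such a course works through: a vector addition in which the kernel ships as a source string and is compiled at runtime for whatever device is present. It assumes an installed OpenCL implementation (such as the AMD APP SDK) and omits error checking for brevity:

    /* vecadd.c -- minimal OpenCL example: c = a + b on the default device.
       Build (paths vary by SDK): cc vecadd.c -lOpenCL */
    #include <stdio.h>
    #include <CL/cl.h>

    /* OpenCL kernels ship as source strings and are compiled at runtime,
       which is what lets one program target GPUs and multicore CPUs alike. */
    static const char *src =
        "__kernel void vecadd(__global const float *a,\n"
        "                     __global const float *b,\n"
        "                     __global float *c) {\n"
        "    int i = get_global_id(0);\n"
        "    c[i] = a[i] + b[i];\n"
        "}\n";

    int main(void)
    {
        enum { N = 1024 };
        float a[N], b[N], c[N];
        size_t global = N;
        int i;
        for (i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

        /* Pick the first platform and its default device. */
        cl_platform_id plat;
        cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof a, a, NULL);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof b, b, NULL);
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

        /* Compile the kernel for the chosen device, set arguments, run. */
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "vecadd", NULL);
        clSetKernelArg(k, 0, sizeof da, &da);
        clSetKernelArg(k, 1, sizeof db, &db);
        clSetKernelArg(k, 2, sizeof dc, &dc);
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

        printf("c[42] = %g (expect 126)\n", c[42]);

        clReleaseMemObject(da); clReleaseMemObject(db); clReleaseMemObject(dc);
        clReleaseKernel(k); clReleaseProgram(prog);
        clReleaseCommandQueue(q); clReleaseContext(ctx);
        return 0;
    }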

AMD also announced that it will hold its first AMD Fusion Developer Summit June 13-16 in Seattle, Washington.

Platform Offers Migration Path for Grid Engine Users

Platform Computing has released a migration program for Grid Engine users aimed at easing the transition to one of Platform’s workload management systems, either Platform HPC, an HPC cluster solution, or Platform LSF, a comprehensive HPC workload management platform.

Presumably, Platform developed the migration program in response to Oracle’s December announcement that it was discontinuing support for the open source version of Grid Engine and shutting down the CollabNet site (gridengine.sunsource.net), and the subsequent exodus of Grid Engine expertise to Univa. Those affected by the news had to decide whether to take their chances with the open source version, purchase Univa’s commercial Grid Engine offering, adopt one of the Grid Engine forks, or migrate to another workload manager altogether.

Platform officials state that while there are multiple Grid Engine paths, there is only one Platform LSF, which has retained backward compatibility for over 18 years. They describe their solutions as offering easy-to-use management capabilities, such as cluster provisioning, workload management, automated workflow, and monitoring and analysis, all accessible through a unified Web interface. Platform LSF and Platform HPC also include application integration templates for ISV applications.

Chris Collins, head of Research Computing Services, University of East Anglia, commented on the university’s experience with the migration tool:

We are very pleased with the results of our decision to partner with Viglen to migrate to Platform Computing. The robust capabilities in Platform HPC will enable us to lower power consumption and increase collaboration between different departments in the University. The Windows/Linux dual boot functionality will help make HPC more accessible to other researchers, who are not traditionally HPC/Linux users. In addition, the easy-to-use interface will make it simpler to capture metrics such as resource usage per user, helping ensure that we are achieving optimal resource utilization as well as facilitating accurate billing for system resource usage.

PGI Updates Compilers, Development Tools

The Portland Group (PGI) has released PGI 2011, its latest line of high-performance parallelizing compilers and development tools for Linux, Mac OS X and Windows. This is the first general release to provide full support for the PGI Accelerator programming model 1.2 specification on x64 processor-based systems incorporating NVIDIA CUDA GPUs. The new PGI release offers several other enhancements for multicore x64 processor-based HPC systems.

Support for the latest Intel and AMD microprocessors is also included, as outlined in the announcement:

New features and enhancements include support for the new Advanced Vector Extensions to the x64 instruction set architecture (AVX) in upcoming Intel Sandy Bridge and AMD Bulldozer CPUs, support for the Fortran 2003 language standard, enhancements in C++ performance through default fast exception handling and improved Boost C++ libraries support, OpenMP nested parallelism, new memory-hierarchy optimizations, debugger improvements including compact parallel register displays and tab-based sub-windows, and performance profiler enhancements to simplify browsing of multi-core profiles. The 2011 release also supports GPU performance profiling and benefits from revamped packaging for faster download and installation.
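The PGI Accelerator model mentioned above is directive-based: the programmer brackets compute-intensive loops and the compiler generates the GPU code and the data movement. Below is a minimal sketch in C using the model’s “#pragma acc region” syntax; the compile line is illustrative:

    /* saxpy.c -- sketch of the PGI Accelerator directive style in C.
       A PGI compile line along the lines of:
           pgcc -ta=nvidia -Minfo=accel saxpy.c
       asks the compiler to offload the marked region to an NVIDIA GPU. */
    #include <stdio.h>

    void saxpy(int n, float alpha, const float *restrict x, float *restrict y)
    {
        int i;
        /* The region directive marks code for GPU code generation; the
           restrict qualifiers help the compiler prove that the loop
           iterations are independent. Without a GPU target, the code
           still runs as an ordinary host loop. */
        #pragma acc region
        {
            for (i = 0; i < n; i++)
                y[i] = alpha * x[i] + y[i];
        }
    }

    int main(void)
    {
        enum { N = 1 << 20 };
        static float x[N], y[N];
        int i;
        for (i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy(N, 3.0f, x, y);
        printf("y[0] = %g (expect 5)\n", y[0]);
        return 0;
    }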

PGI has been working with NVIDIA on integrating CUDA support into its tools. Sanford Russell, director of CUDA marketing at NVIDIA, comments on the partnership:

The continuing evolution of the PGI compilers to support the CUDA parallel architecture ensures that applications developed by more than 100,000 CUDA developers worldwide can be portable to all types of HPC systems. This trend will clearly continue with the upcoming release of the CUDA-x86 compiler, enabling developers to compile and optimize their CUDA applications to run on x86-based systems.

Planned updates for the PGI 2011 software due out this year will include a PGI CUDA C/C++ compiler that allows developers to port CUDA programs to any multicore x64 processor-based system with or without NVIDIA GPU accelerators.
