First Round of 2015 Hackathons Gets Underway

May 29, 2015

May 29 — What good would a supercomputer be without its applications? As supercomputers inevitably scale to newer, more diversified architectures, so too must their applications.

Building on the lauded success of the inaugural 2014 hackathon, organizers from the Oak Ridge Leadership Computing Facility (OLCF), the University of Illinois Coordinated Science Laboratory and National Center for Supercomputing Applications (NCSA), NVIDIA, and The Portland Group (PGI) gathered in April at NCSA on the university’s Urbana-Champaign campus for the next round of application-acceleration programming.

“The goal of the events is for mentors to help the teams prepare their next-generation applications for the next generation of heterogeneous supercomputers and for the teams to help mentors gain insights on how to improve their tools and methods,” said Wen-mei Hwu, co-principal investigator for the Blue Waters supercomputer.

The event’s approximately 40 attendees included four science teams with important applications—SIMULOCEAN, Nek5000, VWM, and PowerGrid—as well as mentors from NVIDIA, PGI, and Mentor Graphics and a group led by Hwu from the CUDA Center of Excellence at the University of Illinois. In addition to access to mentors, each team was given accounts on NCSA’s JYC test system and Blue Waters supercomputer; NVIDIA’s internal cluster; and Titan and Chester (a Cray XK7 supercomputer and a single-cabinet Cray XK7, respectively) at the OLCF, a US Department of Energy (DOE) Office of Science User Facility at DOE’s Oak Ridge National Laboratory.

Teams spent the first four days learning various methods to port their codes to GPUs. At the midpoint of each day, “scrum sessions” allowed programmers to take a break from coding, deliver progress reports, and receive feedback from the mentors and the other teams.

Teams focused primarily on five areas of learning:

  • Programming methods for OpenACC
  • Access procedures for GPU-accelerated libraries, or library calls
  • Code profiling techniques
  • Program optimization
  • Data transfer techniques

Having access to five different systems allowed participants to put those methods into practice and to analyze, compare, and contrast alternative approaches to see which worked best and on which systems. At the end of the four days, each team came away with significant achievements.

“The summary of the successes of each team is that they now have a path forward,” said OLCF’s event organizer Fernanda Foertter. “Also, they learned OpenACC. In some cases, some of them had little to no experience with it.”

One team in particular made remarkable progress. The team working with PowerGrid—an advanced model for MRI reconstruction—used OpenACC for the first time and solved its problem 70 times faster than it could have on a workstation. Team members did so by introducing both GPU and parallel node implementations. In one of their runs, they reconstructed 3,000 brain images with an advanced imaging model in just under 24 hours by using many of the Blue Waters GPU nodes simultaneously—a task that would have taken months without OpenACC.

“In the past we had to approximate our calculations because they were so computationally intensive,” said PowerGrid’s principal investigator Brad Sutton. “In certain situations these approximations have a negative impact on image quality; now we can achieve accurate solutions using OpenACC to maximize image quality and performance.”

PowerGrid team member Alex Cerjanic added, “The connections we made with our mentors and their organizations not only enabled us to reach our goals that week but also connected us to resources that we can use to continue moving our research forward.”

Another contributing factor to each of the teams’ successes was the PGI OpenACC compiler. Taking into account lessons learned from the previous hackathon, the latest compiler release delivers enhanced usability and performance and includes a number of features and fixes implemented as a direct result of feedback from the 2014 event.

On the last day, all four teams delivered presentations to the attendees, detailing their overall experiences and achievements. Foertter presented the final results of the event a week later in Chicago at the 2015 Cray User Group Conference.

The success of the OLCF’s inaugural hackathon last year has resonated well within the HPC community, and by all accounts the NCSA event carried it even further—but it is not going to stop there. April’s event marked the first of three hackathons scheduled in 2015; the next one will take place starting July 10 in Switzerland at the Swiss National Supercomputing Centre (CSCS).

As evidenced by the collaborative endeavors of the OLCF and the NCSA, and with the CSCS event right around the corner, hackathons are proving to be a powerful approach that demonstrates the benefits of strengthening relationships between centers with heterogeneous architectures. And undoubtedly those relationships will be even more beneficial in the future as programmers around the world continue to do their part in advancing the mission of science.

“Hackathons are an effective way for science and engineering teams to quickly migrate from traditional serial programming to many-core programming at scale,” said Blue Waters director Bill Kramer. “The intense sessions, with weeklong dedicated efforts by the teams, vendors, and experts, help to quickly enable teams to reengineer their codes for the future, something that could take months or more without hackathons. The progress of these four teams will enable them to do more and better research in the future.”

Source: Jeremy Rumsey, OLCF
