KAUST: Building an HPC Ecosystem

April 7, 2017

April 7, 2017 — The University held the seventh High Performance Computing Saudi Arabia event—the premier regional event in the field—from March 13 to 15. The three-day conference aimed to create a space where researchers and industry representatives could meet, share ideas and experiences, and discuss cooperation and collaboration.

The 2017 event focused on coordinated efforts for the advancement of an HPC ecosystem in the Kingdom. The first two days of the event included keynote speeches, invited talks, lightning talks, poster presentations, a vendor exhibition and an open discussion aimed at drafting an action plan for setting up an HPC ecosystem in Saudi Arabia.

Each plenary session commenced with a keynote talk, with speakers including Steven E. Koonin, director of the NYU Center for Urban Science and Progress (CUSP); Thomas Schulthess, director of the Swiss National Supercomputing Centre (CSCS) in Lugano; and Dr. Robert G. Voigt, a senior member of the technical staff at the Krell Institute.

Collaboration is key

In his welcome address, Dr. Jysoo Lee, director of the KAUST Supercomputing Core Laboratory, praised the people behind the computing research—the people who help create the ecosystems, machinery and technology.

“The research we have and the people we have really makes KAUST special, and the Shaheen system is what we can be proud of,” Lee said. “What we are trying to do is to help and serve both KAUST and the Kingdom. Since you are here in KAUST, I want you to look at the opportunities and what can be done together.”

Jysoo Lee, director of the KAUST Supercomputing Core Lab, speaks during the HPC Saudi event in March 2017. Source: KAUST

‘There is a science to be done here’

In his opening keynote entitled “Better Cities through Data Acquisition and Analysis,” Koonin highlighted his work and the work of CUSP in the field of urban science and systems. He described how the center uses informatics to study the operations of urban systems, noting how HPC technology enriches the bustling cityscape that is New York City and how it can contribute to broader global issues.

“We need technologies and methodologies to analyze data about cities—there is a science to be done here. Cities have been one of the most complex things that humans have created. Cities are what matter, and by the end of the century, about three-fourths of humanity will be in cities,” Koonin said.

“If you want to change the energy system, technology is great, but the social factor is what you have to work on in the long run. It’s not just about energy, it’s about everything else that happens in a city. You need to understand infrastructure, environment and people to instrument a city,” he continued.

“Cities are built for people by people. You can’t understand a city unless you understand its people. You can try to understand one dimension of a city, or you can focus on just one city and try to discover its various dimensions. One of the biggest challenges is fusing different data sources into usable data. If you can take all of this data and analyze it through data-driven models, you can learn many things. We need to ‘own’ the data by having an intimate familiarity with it,” Koonin added.

How to make HPC mainstream

Merle Giles, formerly of the National Center for Supercomputing Applications (NCSA) and now CEO of Moonshot Research LLC, described how needs differ in research computing and how he has carried the methodologies of his previous workplace into his new company.

“For 20 years or more, enterprise has treated HPC as a hobby—what we do in our new company is similar to what we did in NCSA, which is serve others and help others do what they know how to do better,” he said.

“A ‘valley of death’ exists in both the academic and industry sectors, and nobody funds the middle, which is innovation. We are left to our own practices to move through this middle ground,” he added. “Some differences between research computing and the commercial side are also the differences between macro- and microeconomics. There is a big difference between high-level macroeconomics and company-level microeconomics. KAUST is an example of a clustering effect of a macroeconomic policy. The microeconomic effect is down to the level of the firm. I don’t know any boardroom that talks about HPC—HPC has been in the R&D basement forever.”

Tackling the question of how to take HPC mainstream, Giles said, “Reducing time-to-impact is essential, and HPC plays a big part in this. The key to success is being obsessed with the customer. The customer wins in this game.”

“We have to know what goes on in HPC and we have to know about the companies. The HPC community is where we can solve things, and it may be the only way to peek under the hood and know how it works,” he concluded.

‘Taking charge of change’

Raed Al-Rabeh, manager of EXPEC Network Operations at Saudi Aramco, spoke about the plethora of new technologies, disciplines and modes of operation now available to developers, industry and computing researchers, and how these open up possibilities in HPC that were unthinkable a few years ago. Al-Rabeh also discussed the need to adjust to these changes in the HPC landscape and to adapt or risk being left behind.

“It’s not about change—it’s about us taking charge of change and making good use of it,” he said. “In HPC, you have to understand the architecture and go to very low levels of understanding to get the most out of the system. You have to be a scientist with a strong background in computer engineering or an electrical engineer to get the most out of it. The HPC challenges are not that different from the IT challenges, but they go to a different level.”

“We need to spot opportunities to make good use of our systems—gone are the days when research was funded just for the sake of research. Research is now funded if it drives new opportunities that are close to home—the industry and the society and where we live, not some theoretical question out there in space. Innovation must happen as a regular process, and agility is critical,” he added.

“Our customers aren’t interested in becoming computer scientists or experts so they can use products. They expect the products to work. Technology requires resources, and the knowledge is not very widespread. We need to spread the knowledge and bring it up to speed, and we need to embrace the change and be aware of it to give us the advantage,” he noted.

“We need alignment between business and research, with research doing what business needs. This kind of alignment fuels the research, and then products of the research are deployable and usable. Especially in the Kingdom, very few companies realize the applications of HPC,” Al-Rabeh concluded.

Following on from Al-Rabeh, Sreekanth Pannala of the Saudi Basic Industries Corporation (SABIC) highlighted the role HPC plays at SABIC and how it supports the company’s goals and productivity for the Kingdom.

“We look towards our capabilities from a computing perspective—we look at novel solutions from an HPC perspective to make things faster,” Pannala said.

‘We must move forward’

In his keynote talk, Schulthess reflected on the goals and baseline for exascale computing and how a capable exascale computing system requires an entire computational ecosystem behind it.

“It’s amazing to see so many people engaged with HPC in the Middle East. Globally, we have to figure out what we want to accomplish in particular areas. Today, the fastest supercomputers sustain 20 to 100 petaflops on HPL, and investment in software allows mathematical improvements and changes in architecture,” Schulthess said. “I don’t know what that architecture will be in five to 10 years, but we must move forward with it.”

In his presentation, Muhammad El-Rabaa, an associate professor at the Department of Computer Engineering at King Fahd University of Petroleum & Minerals (KFUPM), outlined how new applications have propelled HPC to the forefront of computing.

“New applications have catapulted HPC from a narrow scientific applications domain to the mainstream—applications like the cloud, packet processing, machine learning, search, analytics, business logic, etc. Computing platforms have continuously evolved, with new platforms continuing to emerge,” he said.

He also highlighted the increasing role of field-programmable gate arrays (FPGAs), integrated circuits that can be configured after manufacturing. “Instead of building one chip, you can now have a few chips, as it is more economical. Several hi-tech executives say that FPGAs will constitute 20 percent of data centers by 2020,” he added.

A fast-moving world

Jeff Brooks, director of supercomputing product management at Cray, discussed the upcoming technology shifts in the marketplace and the implications for systems design in the exascale era.

“Systems with millions of cores will become commonplace. We are trying to invest more in data work, make it work better and scale it out. We want to couple analytics with simulation,” Brooks said. “Another thing that is coming is small, fast memories. This is a fast-moving world, but by working together you can solve problems you couldn’t solve before.”

Delivering scientific solutions

Jeff Nichols, acting director of the National Center for Computational Sciences and the National Leadership Computing Facility at Oak Ridge National Laboratory (ORNL), discussed several scientific areas that require an integrated approach, as well as the effort to create an exascale ecosystem that enables the successful delivery of important scientific solutions across a broad range of disciplines.

“We need to think about how we’re being connected to the data that is generated from the sensors all around us. Our Compute and Data Environment for Science (CADES) provides a shared infrastructure to help solve big science problems. We try to connect our data to our in-silico information from the top down.”

“We have to think about the type of data we are actually deploying on these systems. This is a very complicated workflow scenario we have to come up with. We have four pillars which are: application development, software technology, hardware technology, and exascale systems. The Oak Ridge leadership computing facility is on a well-defined path to exascale. We’re interested in our ecosystem delivering important and critical science for the nation and the world,” he said.

Patricia Damkroger, vice president of the Data Center Group at Intel, spoke on the convergence of simulation and data.

“At Intel, we look at the whole ecosystem. There will be new systems and new workloads and we will need to figure out what is the underlying architecture and hardware that makes those systems work. It’s a question of how can we create a common architecture for data and simulation. The world is changing, and without analytics and AI workloads, we will drown in data,” she said.

Educating computational scientists

Voigt opened the final plenary session of the event with his keynote presentation entitled “The Education of Computational Scientists.” His talk centered on providing a historical perspective of the challenges of educating future computational scientists based on his career experiences.

“One might argue that scientific computing began in the 1950s, and in 1982, computational science was recognized. Computational science takes on a discipline of its own, and there is an opportunity to learn about aspects of computational science through exploring multidisciplinary searches,” Voigt said.

“Computational science involves the integration of knowledge and methodologies. There is now an explosion of data and new areas of science and engineering. There are also rapidly changing computer architectures,” he added.

A leading role in HPC

The third day of the conference offered eight tutorials on emerging technical topics of interest, such as advanced performance tuning and optimization offered by Allinea, Intel and Cray; the best practices of HPC procurement by RedOak; and SLURM workload management by SchedMD. The most popular were “HPC 101,” a step-by-step guide to using Shaheen II, and NVIDIA’s tutorial on deep learning.

A total of 333 people attended the High Performance Computing Saudi Arabia event, making it one of the biggest conferences held at KAUST.

“The conference was a great chance to observe significant HPC interests in the Kingdom. There were lots of discussions on ways to enhance the HPC ecosystem in the Kingdom, and it was clear that KAUST can play a leading role in several of them,” noted Lee.


Source: David Murphy, KAUST News
