New RISC-V High-performance Chips and Software Research Detailed

By Agam Shah

August 14, 2024

Many efforts are underway to make RISC-V production-ready for servers and supercomputing, though the architecture is still years away from viability. China and Europe detailed new high-performance chips, and the EU is building an experimental RISC-V cloud computing environment based on open-source software. Separately, researchers are testing new RISC-V chips, including Tenstorrent’s Grayskull.

RISC-V is an alternative to the x86 and ARM architectures, which dominate the server market. Although RISC-V is years away from being a practical choice for servers and high-performance computers, academic and research institutions are bridging the gap to make that a reality.

The momentum behind RISC-V is undeniable. RISC-V is tied to the strategic national interests of the EU, Russia, and China, which want to build sovereign chips around the architecture.

RISC-V helps countries chart their own course in semiconductor technology. The ISA is free to license, has an open design, and isn’t ruled by any single national interest. The U.S. is weaponizing its chip and AI technologies to choke off China’s access to CPU and GPU technologies.

RISC-V Open ISA Processor Prototype. (Photo By Derrick Coetzee: https://commons.wikimedia.org/w/index.php?curid=25845306)

A host of Chinese organizations working together plan to release the open-source XiangShan K100 CPU this year, which runs at 3GHz. It is a high-performance chip, and China claims performance advantages over some ARM server processors, but take that claim with a pinch of salt.

Chinese institutions started developing the XiangShan family of chips in 2020.

The K100 chip design is open source, meaning anyone can take up the design. China is a member of RISC-V, though members of the U.S. Congress want to investigate the country’s participation in RISC-V International, the standards-setting organization for the ISA.

Researchers from Europe and the U.S. also published a paper detailing a 432-core RISC-V chip called Occamy, which has HBM2e memory, a chiplet design, and is made on a 12nm process.

Faster RISC-V chips are becoming available, but more work is needed in software and hardware to drive adoption of the architecture in high-performance computing, said Nick Brown, senior research fellow at the University of Edinburgh, in a paper.

“In recent years, we have seen closer integration between GPUs and CPUs in HPC by the provision of a unified memory space, with obvious benefits, and RISC-V provides the potential to push this a step further by unifying the ISA and programming model,” Brown said.

He pointed out that companies such as Esperanto, Sophon, and Tenstorrent have released server chips, and more progress is expected in 2024 and beyond.

EU-backed institutions are picking up the slack in software efforts related to RISC-V. The European Union is funding an effort called Vitamin-V, which aims to port the software necessary for RISC-V to cloud environments.

The researchers want to create an equivalent software toolchain to match ARM and x86 deployments in the cloud.

“Vitamin-V will deliver a complete build toolchain based on LLVM. Apart from more conventional, already supported HLLs (High-Level Languages), we will add support for GO, Python3, and Rust,” researchers said in a paper.

The cloud effort revolves around bringing up Kubernetes, Docker, and OpenStack. Researchers in the project are already running OpenStack on a RISC-V cluster of Sipeed’s Lichee Pi 4A development boards, each with a quad-core TH1520 RISC-V CPU, 16GB of RAM, and 128GB of storage.

The developers are using a version of Debian Linux that already supports many of the project’s packages. It is important to note that RISC-V still isn’t a first-class citizen of Linux, with many applications and drivers still being developed and upstreamed.

However, the researchers are running into fundamental software issues.

“Updating the operating system packages and configurations on all nodes is also challenging due to the maturity of the software,” the researchers said.

The researchers also explored tools such as DevStack and Kolla, which “download specific versions of packages and dependencies, which turned into many compilation issues on RISC-V,” they said.

The RISC-V standards committee is developing a standard server design as a blueprint for makers to create RISC-V servers for web serving, gaming, and databases.

In early August, RISC-V published the latest version of a server standard for hardware companies to build barebones servers based on the ISA.

“The RISC-V server platform is defined as the collection of SoC hardware, platform firmware, boot/runtime services, and security services,” says a PDF document defining the platform.

The platform has a central layer with modules for boot, firmware, and security to protect against intrusions. The server platform supports the CXL and PCIe 6.0 interfaces.

The central layer branches into the operating system and hypervisor layers, which orchestrate the software and virtual machines. Another branch is the baseboard management controller, which handles provisioning, hardware, and interfaces on the server.

The server design initiative resembles an effort by the Open Compute Project to build standard server designs for the x86 and ARM architectures. Those designs are now used by the top server makers to scale AI, database, and web workloads.

Separately, a study conducted at the Technical University of Munich investigated Tenstorrent’s Grayskull AI chip, which includes RISC-V processors and 120 Tensix cores. Researcher Moritz Thüning chose the Grayskull e150 AI developer kit, which is available from the company for $799, and implemented and optimized specific operations used in attention mechanisms.

The Grayskull chip has a 10×12 grid of Tensix cores. Each Tensix core has five RISC-V cores, compute engines, a data movement engine, and 1MB of SRAM. In total, Grayskull has 120MB of SRAM, more than the 80MB of SRAM in Nvidia’s H100 GPU. A network-on-chip with a torus topology handles communications between cores.
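The SRAM figures above follow directly from the chip’s layout, as a quick back-of-the-envelope check shows:

```python
# Sanity-check the on-chip SRAM figures cited above.
grid_rows, grid_cols = 10, 12          # Tensix core grid
tensix_cores = grid_rows * grid_cols   # 120 Tensix cores
sram_per_core_mb = 1                   # 1 MB of SRAM per Tensix core

total_sram_mb = tensix_cores * sram_per_core_mb  # 120 MB on-chip
h100_sram_mb = 80                                # figure cited in the article

print(tensix_cores, total_sram_mb, total_sram_mb > h100_sram_mb)
# → 120 120 True
```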

SRAM allows faster access to data used by the attention mechanism, which lets the model focus on relevant parts of the input when producing each part of the output.

The study focused on a fused implementation, which combines and optimizes specific operations such as matrix multiplication, scaling, and Softmax. Softmax is a critical function that converts a vector of raw scores into a probability distribution.
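The operations the study fuses are the standard building blocks of scaled dot-product attention. As a rough sketch of what those steps compute (in NumPy, not the study’s Grayskull kernel code):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability, then exponentiate and normalize.
    z = x - np.max(x, axis=axis, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: matmul (QK^T), scaling (1/sqrt(d_k)),
    # Softmax, then a second matmul to weight the values V.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V
```

A fused kernel performs these steps in one pass so intermediate results stay in fast on-chip SRAM instead of round-tripping through slower memory.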

The researcher observed a 17x speedup for the fused implementation compared to a CPU implementation with caching, thanks to Grayskull’s large SRAM and parallel processing capabilities.

Grayskull isn’t as fast as the H100 in overall computational performance, but it can be more cost-efficient for specific computations. Grayskull delivers 92 and 332 TFLOPs for 16-bit and 8-bit floats, respectively, compared to 1513 and 3026 TFLOPs for the PCIe version of Nvidia’s H100.

But Thüning reminded us that H100 PCIe “is approximately 30x more expensive for the general public.”
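Thüning’s cost-efficiency point can be checked with back-of-the-envelope arithmetic from the figures above. The H100 price here is an assumption derived from his “approximately 30x” remark, not a quoted street price:

```python
# Rough TFLOPs-per-dollar comparison using the article's figures.
grayskull_price = 799            # Grayskull e150 developer kit
h100_price = 799 * 30            # assumption: ~30x, per Thüning's remark

grayskull_fp8_tflops = 332       # 8-bit float throughput
h100_fp8_tflops = 3026           # H100 PCIe, 8-bit float throughput

grayskull_per_dollar = grayskull_fp8_tflops / grayskull_price
h100_per_dollar = h100_fp8_tflops / h100_price

print(round(grayskull_per_dollar / h100_per_dollar, 1))
# → 3.3 (roughly 3x more FP8 TFLOPs per dollar, under these assumptions)
```

The absolute numbers are sensitive to real pricing and to which precision a workload actually uses, but the direction of the comparison holds.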

“It would be interesting to port the implementation to newer generations (e.g., Tenstorrent Wormhole) and to scale it on multiple cards,” Thüning said.
