DONOSTIA-SAN SEBASTIÁN, Spain, June 27, 2024 — Multiverse Computing, a global leader in AI and quantum software solutions, has won funding and supercomputer compute time to build a large language model (LLM) in the Large AI Grand Challenge, run by AI-BOOST, an open challenge program designed to serve as a benchmark for the European artificial intelligence (AI) community.
AI-BOOST, funded by the European Commission, awarded Multiverse Computing 800,000 hours of compute time on a supercomputer to build and train an LLM from scratch using quantum and quantum-inspired technology.
“Winning this challenge recognizes the specific strength of Multiverse Computing in building faster and more energy-efficient LLMs by using quantum and quantum-inspired technology, today,” said Enrique Lizaso Olmos, co-founder and CEO. “We are proud of the confidence EU leaders have in our expertise and ability to create a new class of LLMs with faster training, smaller datasets and lower operating costs.”
The teams awarded in the Large AI Grand Challenge will have 12 months to develop a large-scale AI model with a minimum of 30 billion parameters and train it on one of Europe’s supercomputers. The award also marks a significant step forward in the European Commission’s efforts to apply quantum and quantum-inspired technology to AI, and to LLMs in particular. Quantum AI is attracting growing attention as a potential answer to AI’s escalating demand for computational and energy resources.
Multiverse Computing’s new CompactifAI software uses quantum-inspired techniques to make LLMs 95% smaller and 50% cheaper to train and run while preserving high levels of accuracy. New research from the Multiverse Computing science team compared CompactifAI’s performance with commonly used compression techniques and found that CompactifAI removes unnecessary elements of the model while retaining its fine-grained performance.
As the paper explains, CompactifAI reduces the number of parameters in a model by 70 to 80% and cuts memory requirements by 95% or more. This reduction at least halves both training and inference time, with an accuracy drop of only 2%. At a time when conventional LLMs can cost $100 million or more to train, these savings point to a clear path toward LLMs that are both cheaper and more environmentally friendly.
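To illustrate the general idea behind this kind of compression (CompactifAI itself is reported to rely on tensor-network methods, and the numbers below are illustrative rather than taken from the paper), the short Python sketch that follows uses a truncated SVD, the simplest low-rank factorization, to shrink a single toy weight matrix. All sizes, ranks, and variable names are hypothetical.

# Illustrative sketch only, not Multiverse Computing's actual code:
# a truncated SVD stands in for tensor-network compression to show how
# redundant parameters can be removed from a weight matrix.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" layer: mostly low-rank structure plus a little noise,
# mimicking the redundancy often found in large language model weights.
d, true_rank = 1024, 64
W = rng.standard_normal((d, true_rank)) @ rng.standard_normal((true_rank, d))
W += 0.01 * rng.standard_normal((d, d))

# Compress: keep only the top-k singular directions (k is a hypothetical choice).
k = 96
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A, B = U[:, :k] * s[:k], Vt[:k, :]        # W ≈ A @ B

kept = (A.size + B.size) / W.size
print(f"parameters kept: {kept:.1%}")     # ~18.8% of the original layer

# The compressed layer applies x @ A @ B instead of x @ W.
x = rng.standard_normal((1, d))
err = np.linalg.norm(x @ W - x @ (A @ B)) / np.linalg.norm(x @ W)
print(f"relative output error: {err:.4f}")  # small, since W was nearly low-rank

In a real LLM, a factorization of this kind would be applied layer by layer, typically followed by a short round of retraining to recover any lost accuracy.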
“Our recent benchmarking paper shows that our software significantly reduced the size of a large language model with 7 billion parameters,” said Román Orús, Chief Scientific Officer of Multiverse Computing. “We’re excited to rise to the challenge of using our quantum-inspired techniques as we develop a model from scratch. We also have detailed plans to use near-term quantum computers to accelerate the LLM training process even more.”
The winners of the Large AI Grand Challenge will also have the opportunity to work with the European Commission’s AI and Robotics Group and the European High Performance Computing Joint Undertaking.
Challenge applicants submitted a detailed project plan for the development of a large-scale AI model from scratch, a justification for the use of high-performance computing (HPC), and a demonstration of the team’s expertise in training foundation models on HPC systems.
The overall objective of AI-BOOST is to attract talent from across the EU and Associated Countries to drive scientific progress in AI. The project will foster collaboration among key stakeholders in the AI community to define attractive AI challenges with the potential to lead to trustworthy, human-centric, real-world solutions.
About Multiverse Computing
Multiverse Computing is a leading AI and quantum software platform dedicated to applying quantum and quantum-inspired AI solutions to address complex challenges in finance, energy, manufacturing, logistics, space, life sciences, healthcare and defense, delivering tangible value today.
Source: Multiverse Computing