Cost-effective Fork of GPT-3 Released to Scientists

By Agam Shah

March 28, 2023

Researchers looking to create a foundation for a ChatGPT-style application now have an affordable way to do so.

Cerebras is releasing open-source models that give researchers the ingredients necessary to cook up their own ChatGPT-style applications.

The release includes seven models of varying sizes into which researchers can feed their own data, train a system, and generate results.

The models are based on the GPT-3 large language model architecture that underpins OpenAI’s ChatGPT chatbot, and range up to 13 billion parameters.

“You need a model, and you need data. And you need expertise. And you need computer hardware,” said Andrew Feldman, CEO of Cerebras Systems.

The tools are an effort to give companies an affordable way to build large language models. For scientific researchers with limited, domain-specific datasets, a model with up to 13 billion parameters should be enough.

The largest Cerebras-GPT model is much smaller than GPT-3, which has 175 billion parameters. The latest OpenAI model, GPT-4 – which is closed source and powers Microsoft’s Bing AI – is believed to have significantly more parameters.

Cerebras’ models are also more cost-effective to run: for many uses, GPT-3 would be overkill given its sheer size and the hardware it requires.

OpenAI’s latest GPT-4 – which is for significantly larger data sets – would be expensive at hundreds of thousands of dollars per month for a license, said Karl Freund, principal analyst at Cambrian AI Research.

“There are other models that are available at lower costs that are small, but they are all going to cost you money. This does not cost you anything,” Freund said, adding “whether it’s astronomy or biochemistry or whatever, you want to build a model that is affordable.”

Cerebras is a hardware company primarily known for its AI chips. But the emergence of ChatGPT gave a new lease of life to many AI chipmakers, and software can be an effective showcase of the hardware capabilities, Freund said.

“We’ve trained these models and are making every aspect of that training available to the open-source community and you can run it on CPUs, GPUs and TPUs,” Feldman said, adding that the models are available on Cerebras’ Model Zoo, Hugging Face, or through the company’s GitHub repository.
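As an illustration, the published checkpoints can be pulled from Hugging Face with the `transformers` library. This is a minimal sketch, not part of the announcement: the model ID below (`cerebras/Cerebras-GPT-111M`, the smallest of the seven) and the generation settings are assumptions chosen to keep the example small.

```python
# Minimal sketch: load one of the released Cerebras-GPT checkpoints from
# Hugging Face and generate a completion. Assumes the `transformers` and
# `torch` packages are installed; larger variants of the model go up to 13B.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "cerebras/Cerebras-GPT-111M"  # smallest of the seven released sizes

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Generative AI is", return_tensors="pt")
# Greedy decoding keeps the sketch deterministic; real use would tune sampling.
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights carry a permissive license, the same checkpoint can also serve as a starting point for fine-tuning on a domain-specific dataset.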

The generative AI landscape is still in its infancy, but trends are emerging. OpenAI published the architecture details of GPT-3, but GPT-4 – which Microsoft is using for Bing’s AI capabilities – is closed source.

Beyond Microsoft, Google has LaMDA and PaLM, while Meta has LLaMA, which is licensed only for research. By comparison, Cerebras’ models can be used for commercial applications.

“What we have is a situation where increasingly a handful of companies hold the keys and they’re big and it’s not getting more open over time. It is getting more closed,” Feldman said.

Meta’s LLaMA foundation model has up to 65 billion parameters, while Google’s PaLM has 540 billion, but those models are much more expensive to run given their hardware requirements. Cost per transaction and response time have become a bigger part of the conversation for Microsoft and Google, which are now competing on AI search.

Customers are increasingly concerned about LLM inference costs. Historically, more capable models required more parameters, which meant larger and more expensive inference computing was necessary. Cerebras’ goal is to make sure implementations are competitive on cost and performance.

“In this release we train models with 20 tokens per parameter, which is compute-optimal, but we are already working with customers to train models with far more tokens per parameter – meaning higher quality model at lower inference costs,” said Richard Kuzma, senior product manager of natural language processing at Cerebras.
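The 20-tokens-per-parameter figure Kuzma cites is the compute-optimal ratio popularized by DeepMind’s “Chinchilla” scaling work. As a rough illustration of what that ratio implies for training-set size (the ratio, not the exact training recipe, is what the quote confirms):

```python
# Rough illustration of the 20-tokens-per-parameter rule of thumb:
# a compute-optimal training run needs ~20 training tokens per model parameter.
TOKENS_PER_PARAM = 20

def compute_optimal_tokens(n_params: int) -> int:
    """Training tokens implied by the 20:1 compute-optimal ratio."""
    return n_params * TOKENS_PER_PARAM

# Model sizes spanning the released family; the largest is 13B parameters.
for params in (111_000_000, 1_300_000_000, 13_000_000_000):
    tokens = compute_optimal_tokens(params)
    print(f"{params/1e9:>5.2f}B params -> {tokens/1e9:,.1f}B tokens")
# The 13B model implies roughly 260B training tokens at this ratio.
```

Training past that ratio spends more compute up front, but, as Kuzma notes, yields a smaller model of comparable quality and therefore cheaper inference.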

Customers are not locked into the pricing of cloud providers, and own their model weights after training with Cerebras. They can take advantage of newly developed inference optimizations and are not locked into relying on an API that isn’t under their control.

Cerebras’ Andromeda AI supercomputer. Credit: Cerebras.

The models were trained on Cerebras’ Andromeda AI supercomputer, which links together 16 CS-2 systems with a total of 13.5 million AI compute cores.

The Andromeda system delivers in excess of 1 exaflop of AI performance. An x86-based system comprising 284 Epyc 7713 “Milan” CPUs handles preprocessing before data reaches Cerebras’ wafer-scale WSE-2 chips, each of which packs 2.6 trillion transistors.

The WSE-2 chip was announced in April 2021. Cerebras will announce hardware updates in the coming quarter, Feldman said.

Generative AI was a big topic of discussion at last week’s GPU Technology Conference (GTC) held by Nvidia. ChatGPT has shown the ability to write code, but it can also make things up – which can be entertaining, but is not good for science, said Kathleen Fisher, director of the Information Innovation Office at the U.S. Defense Advanced Research Projects Agency (DARPA), during a breakout session at GTC.

“There’s a growing sense that it is more likely to hallucinate in parts of the data that are less well covered,” Fisher said.

But that is where features like checkpoints – which are included in Cerebras’ models – come in.

“We’re going to see massive movements in addressing that particular issue, based on the science underneath the large language models, large pre-trained models, how they’re based on just statistical predictions of what’s coming next,” Fisher said.

It is healthy practice to distrust AI systems, she said.

“It seems like that capability fundamentally will continue to have a hallucinatory problem, but you could build belts and suspenders or other things on top that might catch the problems and drive the occurrence rate down,” Fisher said.

 
