Nvidia Wins NeurIPS Awards for Research on Generative AI, Generalist AI Agents

November 29, 2022

Two NVIDIA Research papers — one exploring diffusion-based generative AI models and another on training generalist AI agents — have been honored with NeurIPS 2022 Awards for their contributions to the field of AI and machine learning.

These are among more than 60 talks, posters and workshops with NVIDIA authors being presented at the NeurIPS conference, taking place this week in New Orleans and next week online.

Synthetic data generation — for images, text or video — is a key theme across several of the NVIDIA-authored papers. Other topics include reinforcement learning, data collection and augmentation, weather models and federated learning.

“AI is an incredibly important technology, and NVIDIA is making fast progress across the gamut — from generative AI to autonomous AI agents,” said Jan Kautz, vice president of learning and perception research at NVIDIA. “In generative AI, we are not only advancing our theoretical understanding of the underlying models, but are also making practical contributions that will reduce the effort of creating realistic virtual worlds and simulations.”

Reimagining the Design of Diffusion-Based Generative Models 

Diffusion-based models have emerged as a groundbreaking technique for generative AI. NVIDIA researchers won an Outstanding Main Track Paper award for work that analyzes the design of diffusion models, proposing improvements that dramatically increase the efficiency and output quality of these models.

The paper breaks down the components of a diffusion model into a modular design, helping developers identify processes that can be adjusted to improve the performance of the entire model. The researchers show that their modifications enable record scores on a metric that assesses the quality of AI-generated images.
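To illustrate the kind of modularity the paper describes, here is a minimal sketch in which the noise schedule and the sampler are separate, swappable components. The schedule shape follows the paper's general approach of interpolating noise levels in a warped space; the toy one-point denoiser and the specific parameter values are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def sigma_schedule(n_steps, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    """Noise schedule: interpolate between max and min noise in sigma**(1/rho) space."""
    ramp = np.linspace(0.0, 1.0, n_steps)
    lo, hi = sigma_min ** (1 / rho), sigma_max ** (1 / rho)
    return (hi + ramp * (lo - hi)) ** rho

def euler_sample(denoise, x, sigmas):
    """Deterministic Euler sampler: step the sample from high noise to low noise.

    `denoise(x, sigma)` returns an estimate of the clean sample at noise level sigma.
    """
    for i in range(len(sigmas) - 1):
        sigma, sigma_next = sigmas[i], sigmas[i + 1]
        d = (x - denoise(x, sigma)) / sigma   # derivative dx/dsigma
        x = x + d * (sigma_next - sigma)      # Euler step toward lower noise
    return x

# Toy denoiser: the "data distribution" is a single point at 3.0,
# so the ideal denoiser always predicts 3.0 regardless of the input.
toy_denoise = lambda x, sigma: np.full_like(x, 3.0)

x0 = np.random.default_rng(0).normal(size=4) * 80.0      # start from pure noise
out = euler_sample(toy_denoise, x0, np.append(sigma_schedule(18), 0.0))
```

Because the schedule and solver are decoupled, either can be replaced independently — which is the practical point of the paper's modular analysis.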

Training Generalist AI Agents in a Minecraft-Based Simulation Suite

While researchers have long trained autonomous AI agents in game environments such as StarCraft, Dota and Go, these agents are usually specialists in only a few tasks. So NVIDIA researchers turned to Minecraft, the world’s most popular game, to develop a scalable training framework for a generalist agent — one that can successfully execute a wide variety of open-ended tasks.

Dubbed MineDojo, the framework enables an AI agent to learn Minecraft’s flexible gameplay using a massive online database of more than 7,000 wiki pages, millions of Reddit threads and 300,000 hours of recorded gameplay (shown in image at top). The project won an Outstanding Datasets and Benchmarks Paper Award from the NeurIPS committee.

As a proof of concept, the researchers behind MineDojo created a large-scale foundation model, called MineCLIP, that learned to associate YouTube footage of Minecraft gameplay with the video’s transcript, in which the player typically narrates the onscreen action. Using MineCLIP, the team was able to train a reinforcement learning agent capable of performing several tasks in Minecraft without human intervention.
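A model like MineCLIP, which learns to associate gameplay video with language, can then provide a dense reward signal for reinforcement learning: the agent’s recent frames are embedded and compared against the task prompt. A minimal sketch of that idea follows — the function names, embedding sizes and vectors here are illustrative stand-ins, not MineDojo’s actual API.

```python
import numpy as np

def video_text_reward(video_emb, text_emb):
    """Reward = cosine similarity between a gameplay-clip embedding and a
    task-prompt embedding, as produced by a contrastive video-text model."""
    v = video_emb / np.linalg.norm(video_emb)
    t = text_emb / np.linalg.norm(text_emb)
    return float(v @ t)

# Hypothetical embeddings: a clip of the agent's recent frames, scored
# against a task prompt such as "shear a sheep".
rng = np.random.default_rng(1)
task = rng.normal(size=512)
on_task = task + 0.1 * rng.normal(size=512)   # clip that matches the task
off_task = rng.normal(size=512)               # unrelated clip

# A clip showing the requested behavior earns a higher reward, so the RL
# agent is pushed toward the prompted task without hand-written rewards.
assert video_text_reward(on_task, task) > video_text_reward(off_task, task)
```

This is why no human intervention is needed during training: the video-language model, not a person, judges whether the agent’s behavior matches the task description.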

Creating Complex 3D Shapes to Populate Virtual Worlds

Also at NeurIPS is GET3D, a generative AI model that instantly synthesizes 3D shapes based on the category of 2D images it’s trained on, such as buildings, cars or animals. The AI-generated objects have high-fidelity textures and complex geometric details — and are created in a triangle mesh format used in popular graphics software applications. This makes it easy for users to import the shapes into 3D renderers and game engines for further editing.

Named for its ability to Generate Explicit Textured 3D meshes, GET3D was trained on NVIDIA A100 Tensor Core GPUs using around 1 million 2D images of 3D shapes captured from different camera angles. The model can generate around 20 objects a second when running inference on a single NVIDIA GPU.

The AI-generated objects could be used to populate 3D representations of buildings, outdoor spaces or entire cities — digital spaces designed for industries such as gaming, robotics, architecture and social media.

Improving Inverse Rendering Pipelines With Control Over Materials, Lighting

At the most recent CVPR conference, held in New Orleans in June, NVIDIA Research introduced 3D MoMa, an inverse rendering method that enables developers to create 3D objects composed of three distinct parts: a 3D mesh model, materials overlaid on the model, and lighting.

The team has since achieved significant advancements in untangling materials and lighting from the 3D objects — which in turn improves creators’ abilities to edit the AI-generated shapes by swapping materials or adjusting lighting as the object moves around a scene.

Enhancing Factual Accuracy of Language Models’ Generated Text 

Another accepted paper at NeurIPS examines a key challenge with pretrained language models: the factual accuracy of AI-generated text.

Language models trained for open-ended text generation often produce text that includes nonfactual information, since the model simply learns statistical correlations between words to predict what comes next in a sentence. In the paper, NVIDIA researchers propose techniques to address this limitation, which is necessary before such models can be deployed for real-world applications.

The researchers built the first automatic benchmark to measure the factual accuracy of language models for open-ended text generation, and found that bigger language models with billions of parameters were more factual than smaller ones. The team proposed a new technique, factuality-enhanced training, along with a novel sampling algorithm that together help train language models to generate accurate text — and demonstrated a reduction in the rate of factual errors from 33% to around 15%.
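The sampling side of such an approach can be sketched as a decaying variant of standard top-p (nucleus) sampling: the sampling threshold shrinks as a sentence progresses, so later tokens — which tend to carry the factual details — are chosen more greedily. The decay rule and all parameter values below are illustrative assumptions, not the paper’s exact algorithm.

```python
import numpy as np

def decayed_nucleus_p(step, p=0.9, decay=0.9, floor=0.3):
    """Shrink the nucleus threshold at each token within a sentence, down to
    a floor, so generation becomes progressively less random."""
    return max(p * decay ** step, floor)

def nucleus_sample(probs, top_p, rng):
    """Standard top-p sampling: keep the smallest set of tokens whose
    cumulative probability reaches top_p, renormalize, then sample."""
    order = np.argsort(probs)[::-1]                 # tokens by descending prob
    cum = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cum, top_p) + 1]  # smallest covering set
    kept = probs[keep] / probs[keep].sum()
    return int(rng.choice(keep, p=kept))
```

With a sharply peaked distribution and a decayed threshold, sampling collapses to the single most likely token — trading diversity for factual precision late in a sentence.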

There are more than 300 NVIDIA researchers around the globe, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics. Learn more about NVIDIA Research and view NVIDIA’s full list of accepted papers at NeurIPS.


Source: Isha Salian, Nvidia
