Math and Science Behind Blockbuster Movies

By Nicole Hemsoth

March 2, 2007

On Feb. 19 at the annual meeting of the American Association for the Advancement of Science in San Francisco, movie lovers got a behind-the-scenes glimpse at the physics-based simulations that breathe life into fantasy.

“It is an exhaustive task to prescribe the motion of every degree of freedom in a piece of clothing or a crashing wave,” says Ron Fedkiw, an assistant professor of computer science at Stanford who spoke about computations used to make solids and fluids more realistic in feature films. “Since these motions are governed by physical processes, it can be difficult to make these phenomena appear natural. Thus, physically based simulation has become quite popular in the special effects industry. The same class of tools useful for computational fluid dynamics is also useful for sinking a ship on the big screen.”
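The approach Fedkiw describes can be made concrete with a toy example: instead of prescribing every degree of freedom by hand, a simulator integrates Newton's second law for a network of point masses and springs. Below is a minimal sketch of one such cloth-like system in Python; all names and parameters are illustrative and are not drawn from any production code.

    import numpy as np

    # A 1D "cloth" strip: point masses joined by springs. Rather than
    # keyframing each degree of freedom, we let F = ma (spring forces
    # plus gravity) drive the motion.
    n = 20                       # number of point masses
    rest = 0.05                  # spring rest length (m)
    k = 500.0                    # spring stiffness (N/m)
    mass = 0.01                  # mass per particle (kg)
    g = np.array([0.0, -9.81])   # gravity (m/s^2)
    dt = 1e-3                    # time step (s)

    x = np.stack([np.arange(n) * rest, np.zeros(n)], axis=1)  # positions
    v = np.zeros_like(x)                                      # velocities

    def step(x, v):
        f = np.tile(mass * g, (n, 1))            # gravity on every particle
        d = x[1:] - x[:-1]                       # vectors along each spring
        length = np.linalg.norm(d, axis=1, keepdims=True)
        fs = k * (length - rest) * d / length    # Hooke's law
        f[:-1] += fs                             # equal and opposite forces
        f[1:] -= fs
        v = v + dt * f / mass                    # symplectic Euler update
        v[0] = 0.0                               # pin the first particle
        x = x + dt * v
        return x, v

    for _ in range(1000):                        # one simulated second
        x, v = step(x, v)

Production cloth solvers add bending and shear springs, damping, collision handling and implicit integration, but the principle is the same: physics, not an animator, moves each point.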

Fedkiw's talk was part of a symposium titled “Blockbuster Science: Math and Science Behind Movies and Entertainment,” which brought together leaders from industry and academia. The other speakers were Tony DeRose of Pixar in Emeryville, Calif., and Doug Roble of Digital Domain in Venice, Calif. Math Professor Tony Chan of the University of California-Los Angeles moderated the symposium.

Science at the Oscars

This year, two of the three movies nominated for a visual effects Oscar — Poseidon and Pirates of the Caribbean: Dead Man's Chest (which won the Oscar in this category), both made by Industrial Light & Magic (ILM) — required heavy numerical simulation, says Fedkiw, who has consulted for ILM for six years. Most recently, the PhysBAM (for Physics Based Modeling) core math engine he developed helped to create realistic water in Poseidon and Davy Jones' tentacles in Dead Man's Chest.

Computer graphics (CG) experts used to face a hard trade-off: they could run inferior algorithms on many processors, or run the best algorithm on only one, because many of the best algorithms do not scale well to larger numbers of processors. But about a year and a half ago, Fedkiw figured out how to run a state-of-the-art algorithm on many processors, resulting in special effects unprecedented in their realism.
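The scaling problem is often summarized by Amdahl's law: if a fraction s of an algorithm is inherently serial, its speedup on p processors is at most 1 / (s + (1 - s) / p). A back-of-the-envelope comparison in Python (the numbers are invented for illustration and are not taken from Fedkiw's work):

    # Amdahl's law: speedup on p processors when a fraction s of the
    # work is inherently serial.
    def speedup(s, p):
        return 1.0 / (s + (1.0 - s) / p)

    # An algorithm that is 40% serial barely benefits from 64 CPUs...
    print(round(speedup(0.40, 64), 2))   # 2.44x
    # ...while the same work restructured to be 2% serial scales well.
    print(round(speedup(0.02, 64), 2))   # 28.32x

Getting a top algorithm into the second regime, rather than settling for a lesser algorithm that was parallel from the start, is exactly the kind of restructuring described above.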

He designs new algorithms for diverse applications including computational fluid dynamics and solid mechanics, computer graphics, computer vision and computational biomechanics. The algorithms may rotate objects, simulate textures, generate reflections or mimic collisions. Or they may mathematically stitch together slices of a falling water drop, rising smoke wisp or flickering flame to weave realism into CG images.
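Two of those building blocks are simple enough to write out directly. The hypothetical sketch below rotates a point with a rotation matrix and mirrors a direction about a surface normal, the core of a basic reflection calculation:

    import numpy as np

    def rotate_z(p, theta):
        # Rotate point p about the z-axis by theta radians.
        c, s = np.cos(theta), np.sin(theta)
        r = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
        return r @ p

    def reflect(d, n):
        # Mirror direction d about unit surface normal n: d - 2(d.n)n.
        return d - 2.0 * np.dot(d, n) * n

    print(rotate_z(np.array([1.0, 0.0, 0.0]), np.pi / 2))  # ~[0, 1, 0]
    print(reflect(np.array([1.0, -1.0, 0.0]),
                  np.array([0.0, 1.0, 0.0])))  # a ray bouncing off a floor: [1, 1, 0]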

Fedkiw received screen credits for his work on Poseidon; on Terminator 3: Rise of the Machines, for the liquid terminator and the nuclear explosions; and on Star Wars: Episode III — Revenge of the Sith, for explosions in space battle scenes. “My first love is computational physics and most of my career has been dedicated to that,” says Fedkiw, who has published more than 75 research papers in computational physics, computer graphics and vision, as well as a book on level set methods with UCLA's Stanley Osher. Recently he has grown interested in applying computational physics to virtual surgery and modeling of the human face.
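Level set methods, the subject of that book, represent a moving interface (a water surface, a flame front) implicitly as the zero contour of a function phi defined on a grid; moving the interface means evolving phi. A minimal one-dimensional sketch in Python, using invented parameters and first-order upwinding rather than the higher-order schemes used in practice:

    import numpy as np

    nx, dx, dt = 200, 0.01, 0.004
    u = 1.0                         # constant rightward interface speed
    x = np.arange(nx) * dx
    phi = x - 0.5                   # signed distance; interface at x = 0.5

    for _ in range(50):
        # First-order upwind: with u > 0, use the backward difference.
        dphi = (phi - np.roll(phi, 1)) / dx
        dphi[0] = dphi[1]           # crude inflow boundary handling
        phi = phi - dt * u * dphi

    # The zero crossing has moved by roughly u * t = 1.0 * 0.2 = 0.2.
    print(x[np.argmin(np.abs(phi))])  # ~0.7

The same idea in two or three dimensions, with velocities supplied by a fluid solver, is what lets a simulated water surface merge and break apart without anyone tracking it explicitly.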

Fedkiw is the recipient of a National Academy of Sciences award for innovations in the modeling and numerical simulation of flows and pioneering contributions to physically based computer graphics. He also received a David and Lucile Packard Foundation fellowship for simulations of humans and a Presidential Early Career Award for Scientists and Engineers, the nation's highest honor for professionals at the outset of their independent research careers.

Going Hollywood

Research universities like Stanford play big roles in training the next generation of CG specialists and developing the science and technology that gets applied in movies in innovative ways.

“The simulation of gases, liquids and combustion for scientific reasons quickly translates into the ability to make animations of smoke, water and fire,” Fedkiw says. “Similar statements hold for soft biological tissues, muscles, fractures and other solid material problems. Once the scientific numerical simulations are worked out, interesting animations can be made shortly thereafter.”
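One standard transport step behind such smoke animations is semi-Lagrangian advection, popularized in graphics by Jos Stam's "stable fluids" work: each grid cell traces backward along the velocity field and samples the density there, which keeps the scheme stable even at large time steps. A toy Python version with invented parameters (nearest-neighbor sampling stands in for the bilinear interpolation a real solver would use):

    import numpy as np

    n, dt = 64, 0.1
    j, i = np.meshgrid(np.arange(n), np.arange(n))  # j = column, i = row

    density = np.zeros((n, n))
    density[28:36, 28:36] = 1.0     # an initial puff of smoke

    vi = -(j - n / 2)               # fixed rigid rotation about the
    vj = (i - n / 2)                # center, in cells per unit time

    def advect(q):
        # Trace each cell backward along the flow and sample there.
        si = np.clip(np.rint(i - dt * vi).astype(int), 0, n - 1)
        sj = np.clip(np.rint(j - dt * vj).astype(int), 0, n - 1)
        return q[si, sj]

    for _ in range(20):
        density = advect(density)   # the puff swirls around the center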

Most of Fedkiw's students double-major in math and computer science. “Graphics itself is a bit less important, and many of them don't take their first graphics class until their junior or senior year of college,” Fedkiw says. “I started [learning computer graphics] rather late, working in pure mathematics until I was 23 years old, and then switching to applied mathematics after that. I didn't know anything about computer graphics until 1998. And although I did work on engineering-related problems, I didn't do any work in computer science until I started working with a company in 1998 to learn more about graphics.”

Fedkiw earned his doctorate in applied mathematics from UCLA in 1996 and did postdoctoral work at UCLA in mathematics and at Caltech in aeronautics before joining Stanford's Computer Science Department in 2000. He wrote his first two papers for the 2001 SIGGRAPH (Special Interest Group on Computer Graphics and Interactive Techniques) conference, an annual CG meeting convened by the Association for Computing Machinery (ACM). In 2005, ACM SIGGRAPH honored him with its Significant New Researcher Award for contributions to the computer graphics community.

Getting research experience is important for anyone applying to Stanford's computer science doctoral program. “Connecting with a research group is quite important to do in addition to taking classes,” Fedkiw says. He and his students have worked closely with ILM, Pixar, Intel, Honda and Sony Imageworks. “This collaboration with industry is a two-way street and has produced a number of academic papers — as well as some screen credits,” he says. “Both the companies and group at Stanford think of this as a highly synergistic relationship.”

Fedkiw's favorite movie employing CG is Revenge of the Sith. “When I watched the first [Star Wars movie] at 9 years old, I never dreamed that I'd eventually be helping to make the last one.”

---

Source: Stanford University
