Math and Science Behind Blockbuster Movies

By Nicole Hemsoth

March 2, 2007

On Feb. 19 at the annual meeting of the American Association for the Advancement of Science in San Francisco, movie lovers got a behind-the-scenes glimpse at the physics-based simulations that breathe life into fantasy.

“It is an exhaustive task to prescribe the motion of every degree of freedom in a piece of clothing or a crashing wave,” says Ron Fedkiw, an assistant professor of computer science at Stanford who spoke about computations used to make solids and fluids more realistic in feature films. “Since these motions are governed by physical processes, it can be difficult to make these phenomena appear natural. Thus, physically based simulation has become quite popular in the special effects industry. The same class of tools useful for computational fluid dynamics is also useful for sinking a ship on the big screen.”
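
A toy example makes Fedkiw's point concrete. Rather than keyframing every particle of a cloth by hand, an animator prescribes a boundary condition and lets physics fill in the rest. The sketch below is a minimal illustration in Python with NumPy, not PhysBAM or any production method; the stiffness, mass, damping and time step are arbitrary assumptions. It integrates a hanging chain of spring-connected masses, the simplest stand-in for a strip of cloth:

```python
import numpy as np

# A hanging chain of point masses joined by springs: a toy stand-in for
# cloth. Only the top particle is prescribed; gravity and spring forces
# determine every other degree of freedom, as in physically based simulation.
N = 20                             # number of particles
k, rest, mass = 500.0, 0.05, 0.1   # stiffness, rest length, particle mass (arbitrary)
damping = 0.02
g = np.array([0.0, -9.81])
dt = 1e-3                          # small explicit time step for stability

pos = np.stack([np.zeros(N), -rest * np.arange(N)], axis=1)  # start vertical
vel = np.zeros_like(pos)

def step(pos, vel):
    forces = np.tile(mass * g, (N, 1)) - damping * vel
    # Hooke's law between neighboring particles.
    d = pos[1:] - pos[:-1]
    length = np.linalg.norm(d, axis=1, keepdims=True)
    f = k * (length - rest) * d / length
    forces[:-1] += f
    forces[1:] -= f
    # Symplectic Euler: update velocity first, then position.
    vel = vel + dt * forces / mass
    pos = pos + dt * vel
    pos[0], vel[0] = (0.0, 0.0), (0.0, 0.0)   # pin the top particle
    return pos, vel

for _ in range(5000):              # simulate 5 seconds
    pos, vel = step(pos, vel)
print(pos[-1])                     # the free end settles, stretched under gravity
```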

Fedkiw's talk was part of a symposium titled “Blockbuster Science: Math and Science Behind Movies and Entertainment,” which brought together leaders from industry and academia. The other speakers were Tony DeRose of Pixar in Emeryville, Calif., and Doug Roble of Digital Domain in Venice, Calif. Math Professor Tony Chan of the University of California-Los Angeles moderated the symposium.

Science at the Oscars

This year, two of the three movies nominated for a visual effects Oscar — Poseidon and Pirates of the Caribbean: Dead Man's Chest (which won the Oscar in this category), both made by Industrial Light & Magic (ILM) — required heavy numerical simulation, says Fedkiw, who has consulted for ILM for six years. Most recently, the PhysBAM (for Physics Based Modeling) core math engine he developed helped to create realistic water in Poseidon and Davy Jones' tentacles in Dead Man's Chest.

Computer graphics (CG) experts once faced a Catch-22: they could run inferior algorithms on many processors, or run the best algorithm on only one, because many of the best algorithms do not scale well to larger numbers of processors. But about a year and a half ago, Fedkiw figured out how to run a star algorithm on many processors, resulting in special effects unprecedented in their realism.
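
The trade-off Fedkiw describes is usually quantified with Amdahl's law: if a fraction p of an algorithm's work parallelizes and the rest is inherently serial, the best possible speedup on N processors is 1 / ((1 - p) + p/N). The short Python sketch below illustrates the general principle; it says nothing about Fedkiw's specific algorithm, which the article does not detail:

```python
# Amdahl's law: speedup on n processors when a fraction p of the work
# parallelizes perfectly and the rest stays serial.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):
    caps = [speedup(p, n) for n in (1, 8, 64, 512)]
    print(f"parallel fraction {p:.2f}: " + ", ".join(f"{s:6.1f}x" for s in caps))

# A "star" algorithm that is 50% serial tops out near 2x no matter how many
# processors it runs on -- hence the old choice between the best algorithm
# on one processor and a lesser, more parallel one on many.
```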

He designs new algorithms for diverse applications including computational fluid dynamics and solid mechanics, computer graphics, computer vision and computational biomechanics. The algorithms may rotate objects, simulate textures, generate reflections or mimic collisions. Or they may mathematically stitch together slices of a falling water drop, rising smoke wisp or flickering flame to weave realism into CG images.
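
That stitching together of an evolving surface is the kind of job handled by level set methods, the subject of the book with Stanley Osher mentioned below: the interface, say a water surface, is stored implicitly as the zero contour of a signed distance function, and moving the surface reduces to advecting that function. Here is a minimal sketch in Python; the grid resolution, constant velocity field and first-order upwind scheme are illustrative assumptions, far simpler than a production solver:

```python
import numpy as np

# Level set sketch: represent a circle implicitly as the zero contour of a
# signed distance function phi, then move it by advecting phi with a
# velocity field (here a constant drift; grid size and dt are arbitrary).
n, h = 128, 1.0 / 128
x = (np.arange(n) + 0.5) * h
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt((X - 0.3) ** 2 + (Y - 0.5) ** 2) - 0.15   # circle, radius 0.15

u, v = 0.5, 0.0                            # constant drift to the right
dt = 0.5 * h / max(abs(u), abs(v), 1e-9)   # CFL-limited time step

def advect(phi):
    # First-order upwind differences; simple but numerically diffusive.
    dx_m = (phi - np.roll(phi, 1, axis=0)) / h   # backward difference in x
    dx_p = (np.roll(phi, -1, axis=0) - phi) / h  # forward difference in x
    dy_m = (phi - np.roll(phi, 1, axis=1)) / h
    dy_p = (np.roll(phi, -1, axis=1) - phi) / h
    dphix = dx_m if u > 0 else dx_p
    dphiy = dy_m if v > 0 else dy_p
    return phi - dt * (u * dphix + v * dphiy)

for _ in range(100):
    phi = advect(phi)

# The zero contour has drifted right by roughly u * 100 * dt.
inside = phi < 0
print("interface centroid x:", X[inside].mean())
```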

Fedkiw received screen credits for his work on Poseidon; on Terminator 3: Rise of the Machines, for the liquid terminator and the nuclear explosions; and on Star Wars: Episode III — Revenge of the Sith, for explosions in space battle scenes. “My first love is computational physics and most of my career has been dedicated to that,” says Fedkiw, who has published more than 75 research papers in computational physics, computer graphics and vision, as well as a book on level set methods with UCLA's Stanley Osher. Recently he has grown interested in applying computational physics to virtual surgery and modeling of the human face.

Fedkiw is the recipient of a National Academy of Sciences award for innovations in the modeling and numerical simulation of flows and pioneering contributions to physically based computer graphics. He also received a David and Lucile Packard Foundation fellowship for simulations of humans and a Presidential Early Career Award for Scientists and Engineers, the nation's highest honor for professionals at the outset of their independent research careers.

Going Hollywood

Research universities like Stanford play a big role in training the next generation of CG specialists and in developing the science and technology that films apply in innovative ways.

“The simulation of gases, liquids and combustion for scientific reasons quickly translates into the ability to make animations of smoke, water and fire,” Fedkiw says. “Similar statements hold for soft biological tissues, muscles, fractures and other solid material problems. Once the scientific numerical simulations are worked out, interesting animations can be made shortly thereafter.”
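
One concrete example of a scientific scheme that crossed over into animation is semi-Lagrangian advection, a stable transport step widely used in graphics smoke solvers: each grid point is traced backward through the velocity field, and the advected quantity is interpolated at the departure point. The sketch below shows the idea in Python with NumPy and SciPy; the rotating velocity field and resolution are made-up stand-ins for what a full Navier-Stokes solver would supply:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Semi-Lagrangian advection: to update a quantity q (e.g. smoke density),
# trace each grid point backward through the velocity field and
# interpolate q at the departure point. Stable even for large time steps.
n, dt = 64, 0.1
ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")

# A fixed solid rotation about the grid center (a stand-in for the
# velocity a real solver would obtain from the Navier-Stokes equations).
c = (n - 1) / 2.0
u = -(jj - c) * 0.05
v = (ii - c) * 0.05

def advect(q):
    # Backtrace departure points, then bilinearly interpolate q there.
    src_i = ii - dt * u
    src_j = jj - dt * v
    return map_coordinates(q, [src_i, src_j], order=1, mode="nearest")

# Seed a blob of smoke density and let the rotation carry it around.
q = np.exp(-((ii - 16) ** 2 + (jj - 32) ** 2) / 30.0)
for _ in range(200):
    q = advect(q)
print("total density (roughly conserved):", q.sum())
```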

Most of Fedkiw's students double-major in math and computer science. “Graphics itself is a bit less important, and many of them don't take their first graphics class until their junior or senior year of college,” Fedkiw says. “I started [learning computer graphics] rather late, working in pure mathematics until I was 23 years old, and then switching to applied mathematics after that. I didn't know anything about computer graphics until 1998. And although I did work on engineering-related problems, I didn't do any work in computer science until I started working with a company in 1998 to learn more about graphics.”

Fedkiw earned his doctorate in applied mathematics from UCLA in 1996 and did postdoctoral work at UCLA in mathematics and at Caltech in aeronautics before joining Stanford's Computer Science Department in 2000. He wrote his first two graphics papers for SIGGRAPH 2001 (short for Special Interest Group on Computer Graphics), an annual CG conference convened by the Association for Computing Machinery (ACM). In 2005, ACM SIGGRAPH honored him with its Significant New Researcher Award for contributions to the computer graphics community.

Getting research experience is important for anyone applying to Stanford's computer science doctoral program. “Connecting with a research group is quite important to do in addition to taking classes,” Fedkiw says. He and his students have worked closely with ILM, Pixar, Intel, Honda and Sony Imageworks. “This collaboration with industry is a two-way street and has produced a number of academic papers — as well as some screen credits,” he says. “Both the companies and group at Stanford think of this as a highly synergistic relationship.”

Fedkiw's favorite movie employing CG is Revenge of the Sith. “When I watched the first [Star Wars movie] at 9 years old, I never dreamed that I'd eventually be helping to make the last one.”

-----

Source: Stanford University
