The Hair-Raising Potential of Exascale Animation

November 9, 2017

Nov. 9, 2017 — There is no questioning the power of a full head of shiny, buoyant hair. Not in real life, not in commercials, and, it turns out, not in computer-generated (CG) animation. Just as more expensive brands of shampoos provide volume, luster, and flow to a human head of hair, so too does more expensive computational power provide the waggle of a prince’s mane or raise the hackles of an evil yak.

Hair proves to be one of the most complex assets in animation, as each strand is composed of near-infinite individual particles and affects the way every other strand behaves. With the 2016 release of its feature Trolls, DreamWorks Animation had an entire ensemble of characters with hair as a primary feature. The studio will raise the bar again with the film’s sequel, slated for 2020.
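
To make that cost concrete, here is a minimal, hypothetical Python sketch of a single strand modeled the way the paragraph above describes it: a chain of particles joined by stiff springs, pinned at the scalp and pulled by gravity. Every constant (particle count, stiffness, time step) is an assumption chosen purely for illustration; this is not DreamWorks’ production hair solver, and the strand-to-strand interactions that drive much of the real cost are omitted entirely.

```python
# A minimal, hypothetical sketch of why simulated hair is expensive:
# one strand modeled as a chain of particles joined by stiff springs,
# stepped with explicit (semi-implicit Euler) integration. Real production
# hair solvers are far more sophisticated, but the cost pattern is the same:
# particles-per-strand x strands-per-character x frames.

import numpy as np

N_PARTICLES = 50          # particles in one strand (assumed)
REST_LEN = 0.02           # rest length between neighbors, meters (assumed)
STIFFNESS = 500.0         # spring constant (assumed)
DAMPING = 0.98            # simple velocity damping (assumed)
GRAVITY = np.array([0.0, -9.81, 0.0])
DT = 1.0 / 240.0          # sub-frame time step (assumed)

# Particle state: the strand hangs straight down from a fixed root.
pos = np.zeros((N_PARTICLES, 3))
pos[:, 1] = -REST_LEN * np.arange(N_PARTICLES)
vel = np.zeros_like(pos)

def step(pos, vel):
    """Advance one strand by one sub-step."""
    forces = np.tile(GRAVITY, (N_PARTICLES, 1))
    # Spring force between each pair of neighboring particles.
    seg = pos[1:] - pos[:-1]
    length = np.linalg.norm(seg, axis=1, keepdims=True)
    direction = seg / np.maximum(length, 1e-9)
    spring = STIFFNESS * (length - REST_LEN) * direction
    forces[:-1] += spring       # pull particle i toward particle i+1
    forces[1:] -= spring        # and particle i+1 back toward particle i
    vel = (vel + DT * forces) * DAMPING
    vel[0] = 0.0                # the root particle stays pinned to the scalp
    pos = pos + DT * vel
    return pos, vel

for _ in range(240):            # one second of simulated motion
    pos, vel = step(pos, vel)

print("tip position after 1s:", pos[-1])
```

Multiply that single strand by the tens of thousands of strands on one character, and then by every frame of a feature, and the appetite for compute becomes clear.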

The history of DreamWorks Animation is, in many ways, the history of technical advances in computing over the last three decades. Those milestones are evidenced by that flow of hair—or lack thereof—the ripple in a dragon’s leathery wing, or the texture and number of environments in any given film.

Exascale computing will push the Media and Entertainment industry beyond today’s technical barriers.

As the development and accessibility of high-performance computers explode beyond current limits, so too will the creative possibilities of CG animation ignite.

Jeff Wike, Chief Technology Officer (CTO) of DreamWorks Animation, has seen many of the company’s innovations come and go, and fully appreciates both the obstacles and the potential of technological advances on his industry.

“Even today, technology limits what our artists can create,” says Wike. “They always want to up the game, and with the massive amount of technology that we throw at these films, the stakes are enormous.”

Along with his duties as CTO, Wike is a member of the U.S. Department of Energy’s Exascale Computing Project (ECP) Industry Council. The advisory council comprises an eclectic group of industry leaders reliant on, and looking to the future of, high-performance computing, now hurtling toward the exascale frontier.

The ability to perform a billion billion operations per second changes the manufacturing and services landscape for many types of industries. And, as Wike will tell you, strip away the creative process and those in the animation industry are manufacturers of digital products.
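
For scale, “a billion billion” operations per second is 10^18, or one exaFLOPS. The back-of-envelope arithmetic below is purely illustrative; the assumed cost per frame is a round number, not a figure from DreamWorks or the ECP.

```python
# Back-of-envelope arithmetic for "a billion billion operations per second":
# an exascale machine sustains on the order of 10**18 operations per second.
# The per-frame cost below is an assumed, illustrative round number, not a
# measurement from DreamWorks or the ECP.

EXAFLOPS = 10**18                  # 1 exaFLOPS, operations per second
PETAFLOPS = 10**15                 # 1 petaFLOPS, for comparison
OPS_PER_FRAME = 10**15             # assumed cost of one complex CG frame
FRAMES_PER_FILM = 24 * 60 * 90     # a 90-minute film at 24 frames per second

for label, rate in [("petascale", PETAFLOPS), ("exascale", EXAFLOPS)]:
    film_seconds = FRAMES_PER_FILM * OPS_PER_FRAME / rate
    print(f"{label}: {film_seconds / 3600:.2f} hours of pure compute per film")
```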

“This is bigger than any one company or any one industry,” he says. “As a member of the ECP’s Industry Council, we share a common interest and goal with companies representing a diverse group of U.S. industries anxiously anticipating the era of exascale computing.”

Such capability could open a speed-of-light gap between DreamWorks’ current 3D animation and the studio’s origins, 23 years ago, as a 2D animation company producing computer-aided hand-drawn images.

Growing CG Animation

Wike’s role has certainly evolved since he joined DreamWorks in 1997, with the distinctive job title of technical gunslinger, a position in which he served, he says, as part inventor, part MacGyver, and part tech support.

When Chris deFaria joined DreamWorks Animation as president in March 2017, he instantly identified an untapped opportunity that could only be pursued at a studio where storytellers and technology innovators work in close proximity. He created a collaboration between these two areas in which the artists’ infinite imaginations drive cutting-edge technology innovations, which, in turn, drive the engineers to imagine even bigger. In essence, it is a perpetual-motion machine of innovation and efficiency.

Under this new reign, Wike distills his broader role into three simple goals: make sure employees have what they need, reduce the cost and production time of films, and continue to innovate in those areas that are transformational.

High-Performance Computing Is Key to Innovation

For DreamWorks—and other large industry players like Disney and Pixar—the transformation of the animated landscape is, and has been, driven by innovations in computer software and hardware.

Much of the CG animation industry was built on the backs of what were, in the late 1990s, fairly high-performance graphics-enabled processors. But computer technology advanced so quickly that DreamWorks was challenged to keep up with the latest and greatest.

“Some of the animators had home computers that were faster than what we had at work,” Wike recalls.

By the time Shrek appeared in 2001, after the early successes of DreamWorks’ first fully CG animated feature, Antz, and Pixar’s Toy Story, it was clear to the fledgling industry, and the movie industry as a whole, that CG animation was the next big wave. Audiences, too, were already expecting higher quality, more complexity, and greater diversification with each succeeding film.

To meet mounting expectations, the industry needed a computational overhaul that would afford it more power and greater consistency. As the early graphics processors faced more competition, the industry banded together to agree on common requirements, such as commodity hardware, open-source libraries, and codes. These requirements developed into an approved list that is easier for vendors to support.

Today, DreamWorks’ artists use high-end, dual-processor, 32-core workstations with network-attached storage, backed by HPE Gen9 servers totaling 22,000 cores in the company’s data center. That number is expected to nearly double soon, now that the company has ramped up for production of How to Train Your Dragon 3.

It’s still a long way from exascale. It’s a long way from petascale, for that matter: current petascale computers can comprise upwards of 750,000 cores. But the industry continues to push the envelope of what’s possible and what is available. Continuous upgrades in hardware, along with retooling and development of software, create ever-more-astounding visuals and further prepare the industry for the next massive leap in computing power.
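
To put that gap in rough numbers, the sketch below compares the 22,000-core render farm cited above against petascale and exascale machines, assuming a round 10 gigaFLOPS of sustained throughput per core. That per-core figure is an assumption for illustration only, not an HPE Gen9 specification.

```python
# A rough sense of the gap described above, under stated assumptions:
# treat each data-center core as sustaining ~10 gigaFLOPS (an assumed,
# round number, not an HPE Gen9 spec) and compare the aggregate against
# petascale (10**15 FLOPS) and exascale (10**18 FLOPS).

CORES = 22_000                 # DreamWorks data-center cores cited above
FLOPS_PER_CORE = 10e9          # assumed sustained throughput per core

aggregate = CORES * FLOPS_PER_CORE
print(f"aggregate:     {aggregate:.2e} FLOPS")
print(f"vs. petascale: {10**15 / aggregate:,.0f}x short")
print(f"vs. exascale:  {10**18 / aggregate:,.0f}x short")
```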

“I’d be naïve to say that we’re ready for exascale, but we’re certainly mindful of it,” says Wike. “That’s one reason we are so interested in what the ECP is doing.  The interaction with the technology stakeholders from a wide variety of industries is invaluable as we try to understand the full implications and benefits of exascale as an innovation driver for our own industry.”

To read more, follow this link: https://www.exascaleproject.org/hair-raising-potential-exascale-animation/


Source: Exascale Computing Project
