Larry Smarr: The Future of Distributed Computing Is Here

By Alex Woodie

August 25, 2020

Larry Smarr may have stepped back from full-time work in the Computer Science and Engineering Department at the University of California, San Diego, but that doesn’t mean he’s slowing down. In fact, since his schedule has opened up, he’s now free to do other things, such as evangelizing the benefits of distributed computing for local startup Kazuhm. Datanami recently caught up with Smarr to get his take on how new technologies like Kubernetes and 5G are changing the distributed computing game.

Smarr, who received his PhD in physics from the University of Texas at Austin in 1975, has had the sort of academic career that most computer scientists can only dream of. In 1983, he famously submitted a short 10-page report to the National Science Foundation called “A Center for Scientific and Engineering Supercomputing.” Better known as the Black Proposal (for the color of its cover), it was the first unsolicited proposal accepted by the NSF, and it led to the founding of five supercomputer centers, at Cornell, Illinois, Princeton, San Diego, and Pittsburgh.

As the founding director of the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, Smarr played a role in ushering in the modern Web with the development of Mosaic, a Web browser released in 1993 that built on the HTTP and HTML protocols that Tim Berners-Lee created three years earlier. Smarr moved to California in 2000 and helped found the California Institute for Telecommunications and Information Technology (Calit2) at UCSD and then UC Irvine.

In June, Smarr stepped back from full-time duty at UCSD and from his role as director of Calit2, taking the title Distinguished Professor Emeritus. As the principal investigator on three large computing research proposals, however, Smarr has not entirely divorced himself from UCSD. In fact, the school has called him back to “active duty” to work on those projects (so long as the work doesn’t consume more than 43% of his time, per the bureaucratic rule book).

Freed from the day-to-day requirements of his former job, Smarr signed up to be an evangelist for distributed computing with a local San Diego outfit called Kazuhm. While he has no official position with the Analytic Ventures-funded startup, he clearly sees potential in the company, which develops a platform that utilizes Docker containers and Kubernetes to enable customers to distribute computing workloads to any device, including smaller devices on the IoT.

According to Smarr, the computing world is on the cusp of major changes, even if it doesn’t necessarily feel that way to many people.

“When these things come–and they come on in exponentials–people begin to hear the words, but they don’t get there’s going to be major transformation,” Smarr says. “Go back to file sharing with Napster. All the executives at the record companies were sending out lawyers to old ladies and the kids. They didn’t get that the world was going to completely change and they were going to become extinct in about three years. Or Blockbuster. Or Netflix.

“I’ve lived through so many of these transformations, and I just think the day of ubiquitous distributed computing is finally here,” he continues. “It isn’t like it hasn’t been here for decades. That’s what I, of course, have been doing for decades. But it’s getting to the point where it’s actually becoming mainstream.”

While many computing advances emerged from academia in decades past, we’re now seeing more technological breakthroughs from companies in the private sector, Smarr points out. Google, for instance, has been party to many of the advances since 2000, including the Google File System, which Doug Cutting and Mike Cafarella used as the inspiration for the Hadoop Distributed File System; Bigtable, a NoSQL database built atop the Google File System; and Spanner, a globally distributed relational database that succeeded Bigtable.

Smarr identified Kubernetes–a resource scheduler for containers that Google created to manage its internal computing resources and released as an open source project in 2014–as an especially disruptive technology, and one that will help usher in the next generation of distributed computing.

“To me, the containers are just an extraordinary advance because what you always needed in a distributed system was a mobile way for your software to run around in a distributed system, and then execute when it found a host that it could sit on,” Smarr explains. “In other words, you’re not doing RPC [remote procedure calls]. You’re not doing remote log-in on that computer to be able to use it, which is what we had to do forever. You let the software worry about that.”

For decades, companies have needed a system that could identify all the computing resources in the organization. But it’s only been in the past few years, with Docker and Kubernetes, that such systems have become available and practical to use, according to Smarr.
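The core idea, matching containerized workloads to whatever resources the organization has, can be illustrated with a toy scheduler. The node and task definitions below are hypothetical; a real orchestrator such as Kubernetes adds health checks, bin-packing heuristics, affinity rules, and much more. This is only a minimal sketch of the placement problem.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A machine in the fleet, with its currently spare capacity."""
    name: str
    free_cpus: int
    free_mem_gb: int

@dataclass
class Task:
    """A containerized workload and the resources it requests."""
    name: str
    cpus: int
    mem_gb: int

def schedule(tasks, nodes):
    """Greedy first-fit placement: assign each task to the first
    node with enough spare CPU and memory, or None if none fits."""
    placements = {}
    for task in tasks:
        for node in nodes:
            if node.free_cpus >= task.cpus and node.free_mem_gb >= task.mem_gb:
                node.free_cpus -= task.cpus
                node.free_mem_gb -= task.mem_gb
                placements[task.name] = node.name
                break
        else:
            placements[task.name] = None  # no node can host this task
    return placements

# A mixed fleet: a server, a desktop, and a small IoT-class device.
nodes = [
    Node("server-1", free_cpus=16, free_mem_gb=64),
    Node("desktop-7", free_cpus=4, free_mem_gb=8),
    Node("iot-cam-3", free_cpus=1, free_mem_gb=1),
]
tasks = [
    Task("analytics", cpus=8, mem_gb=32),
    Task("web-cache", cpus=2, mem_gb=4),
    Task("sensor-filter", cpus=1, mem_gb=1),
]
print(schedule(tasks, nodes))
```

The scheduler never remotely logs in anywhere; it only decides where each container should land, which is Smarr’s point about letting the software worry about placement.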

“I think it’s the biggest change in distributed computing since Mosaic, and more importantly since Tim Berners-Lee and CERN wrote down the HTML and HTTP protocols,” Smarr says. “That unleashed an enormous capability that we didn’t know was there. Now 2 billion people per day update their Facebook page.”

Many companies that want to move their applications to the cloud have had to “containerize” their software–i.e. adapt it to run within a container (usually Docker). Those that have done that work have benefited from the increased flexibility that cloud computing brings (although it’s not necessarily cheaper as Smarr points out).

The cloud is a homogeneous computing environment, with Linux on x86 being the norm, even for Microsoft Azure. But with tools like Kazuhm, which works with heterogeneous devices spanning Linux, Windows, Mac, VMware, and other operating environments for servers, desktops, laptops, smartphones, tablets, and IoT devices, companies now have a way to manage their entire IT infrastructure using containers and Kubernetes.

“It’s not obvious to me that Kubernetes is an appropriate way to orchestrate containers in a normal enterprise. It’s a wonderful thing if you go, say, between the clouds or anything else,” Smarr says. “But I’m also sort of struck by the number of folks who haven’t yet understood that containers change everything and they’re still trying to figure out, ‘You mean I’ve got to containerize? What the hell does that mean?’

“We’re on that part of the learning curve where there’s still an awful lot of very smart, savvy IT people who haven’t made the connection,” he continues. “One of the things that really helps people is you have something like Kazuhm where it’s a tool that you can go into your existing enterprise– you don’t have to change any of your stuff–and it will actually map out that enterprise that you own but you maybe don’t know.”
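Mapping out an enterprise in the way Smarr describes starts with each device reporting what it has. This is not Kazuhm’s actual protocol, just a hypothetical per-device agent report built from the Python standard library, showing the kind of facts an orchestrator needs before it can place containers on a heterogeneous fleet:

```python
import json
import os
import platform
import socket

def inventory_report():
    """What a per-device agent might send to an orchestrator:
    enough to decide which container images this node can run."""
    return {
        "host": socket.gethostname(),
        "os": platform.system(),     # e.g. 'Linux', 'Windows', 'Darwin'
        "arch": platform.machine(),  # e.g. 'x86_64', 'aarch64'
        "cpus": os.cpu_count(),
    }

# Each machine in the enterprise would run this and report back,
# building the map of resources "that you own but you maybe don't know."
print(json.dumps(inventory_report(), indent=2))
```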

Smarr says we’re on the cusp of another breakthrough, around connected devices and the IoT. The rapid increase in the capabilities of system-on-a-chip designs is enabling companies to place more powerful devices in more locations in the real world. As the 5G rollout brings fast network access with low latencies, the potential to leverage all that previously isolated computing power will open up new business opportunities.

“One of the things that we think are not on people’s radar yet is just how rapidly IoT processors are gaining in capacity in speed and complexity,” Smarr says. “What I see coming is this partnership between the cloud and an exponentially larger set of machine intelligent sensors on the edge of the net.”

With Calit2, Smarr was involved with WIFIRE, a project that used remote sensors to monitor for wildfires in San Diego County. The project includes a high-speed wireless grid that connects 60 backcountry stations, where optical sensors use trained neural networks to identify smoke plumes. This is an example of how distributed computing applications will be deployed in the future, he says.

“You’re doing a bunch of training on a neural net for what does a smoke plume look like. You don’t want to do that on the camera. You want to do that on the cloud with a zillion instances of images, and then you want to deploy that trained, weighted net into a low-energy, fast device on the edge,” Smarr says.
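The split Smarr describes, heavy training in the cloud and light inference at the edge, can be sketched in a few lines. The features and classifier below are hypothetical stand-ins (a real smoke-plume detector would train a convolutional network on large image sets); the point is that training produces a small bundle of weights, and that bundle is all the low-power edge device has to apply.

```python
import math
import random

def train_in_cloud(samples, labels, lr=0.5, epochs=200):
    """Heavy step: fit a tiny logistic classifier by gradient descent.
    Returns only the learned weights, the artifact shipped to the edge."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            err = p - y
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def infer_at_edge(weights, bias, x):
    """Light step: apply the shipped weights on the device itself."""
    z = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1.0 / (1.0 + math.exp(-z)) > 0.5

# Synthetic "plume vs. clear sky" features: (haze density, drift speed).
random.seed(0)
plumes = [(random.uniform(0.6, 1.0), random.uniform(0.6, 1.0)) for _ in range(50)]
clear = [(random.uniform(0.0, 0.4), random.uniform(0.0, 0.4)) for _ in range(50)]
samples = plumes + clear
labels = [1] * 50 + [0] * 50

# Train once in the "cloud"; deploy only (w, b) to the camera.
w, b = train_in_cloud(samples, labels)
print(infer_at_edge(w, b, (0.9, 0.8)))  # strongly plume-like reading
print(infer_at_edge(w, b, (0.1, 0.2)))  # clear-sky reading
```

The asymmetry is the design point: training touches all 100 samples repeatedly, while the edge step is two multiplies, two adds, and a comparison.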

“Well, if you actually look at those devices, how they’re executing, many of them are becoming quite powerful,” he continues. “Right now we think of the server room as the heavyweight IT. And then we have all these hundreds or thousands of laptops. That’s lightweight. Now imagine you’ve got tens to hundreds of thousands of IoT devices. That’s the next layer coming.”

With a wide deployment of 5G antennas and many connected devices with increasingly fast processors and useful memory footprints, companies will essentially have their own private cloud at their disposal. With Kubernetes and containers, and software like Kazuhm’s handling the low-level technical details of distribution of workloads and interrupts, companies will be equipped to use that private cloud to do useful work on the massive amount of data they’re collecting.

We’re in the midst of “an invisible rising tide of new computing parallelism,” Smarr says. “It’s like somehow we did a phase change and dissolved all the barriers between all of our computing devices with containers.”

Smarr has been on the forefront of these kinds of phase changes before. He points out that, when he helped start Calit2 in 2000, the group’s byline was “The Internet moving to the physical world.” That was before Wi-Fi was widely available, he points out, and three years before users started accessing the Internet on their cell phones.

“It’s kind of scary to live long enough to see your prophecies coming true,” Smarr says. “I do thank the Lord that I’ve lived this long so that I can see some of this happening. But it was inevitable. And as it happens, as you get these hugely disruptive things happening, whether it’s the Internet or the Web or social media, people are still stuck in the older world. As William Gibson said, the future is already here. It’s just unevenly distributed.”

This story originally appeared on sister publication Datanami.
