SC21 Keynote: Internet Pioneer Vint Cerf on Shakespeare, Chatbots, and Being Human

By John Russell

November 17, 2021

Unlike the deep technical dives of many SC keynotes, internet pioneer Vint Cerf steered clear of the trenches and took a leisurely stroll through a range of human-machine interactions, touching on ML’s growing capabilities while noting potholes to be avoided where possible. Cerf, of course, is co-designer with Bob Kahn of the TCP/IP protocols and the architecture of the internet. He is heralded as one of the fathers of the internet and today is vice president and chief internet evangelist for Google. (A brief bio appears at the end of the article.)

“[The] humanities ask a very simple question: what does it mean to be human? So, we try to answer that question. We study music and poetry, we study art, languages, [and] history to try to understand how humans affect the flow of history, how their decisions and preferences and excitement and joy and anger and everything else [shape it]. Then we assume that those palpable expressions shown in text and art are somehow telling us more than just simple biology. Surely, we are more than just our DNA; at least, I hope so,” said Cerf, sharing credit for that formulation of the humanities’ goal with colleague Michael Witmore, director of the Folger Shakespeare Library in Washington, D.C.

Vint Cerf, Google

The broad question posed by Cerf in his SC21 keynote (Computing and the Humanities) is how and to what extent human-computer interactions can contribute to the humanities. Language, visual art, and critical thinking all made their way into Cerf’s presentation. The implied question, not answered but frequently hinted at, is to what extent computers will be tools, assistants, partners, or masters in the humanities.

He began with a cautionary tale built around Shakespeare’s sonnet 73: a computer system trained on the Bard’s works is presented with an unfinished fragment of the sonnet and, as you may have guessed, is able to (mostly) generate the missing text, off by just one word.

“The point I want to make is that this wasn’t simply a thing that did string matching and then plucked out the rest of the sonnet. This [was] generated, based on statistical information, [and is] almost what Shakespeare wrote. The reason that’s interesting is that if we chose to provide some other preambles that were not written by Shakespeare, the system would still try its best to produce a statistically valid conclusion to the rest of the sonnet,” said Cerf. “There might be a time when, if you were skilled enough, you might be able to write something which is very Shakespearean at the beginning, and then let the system produce the rest of it, which you could then discover miraculously as a Shakespeare piece that no one had found before, and take a picture of and sell it as an NFT for $69 million.”
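Cerf didn’t describe how the system was built, but the core idea (generating a continuation from learned statistics rather than by string matching) can be illustrated with a toy word-level Markov chain. The sketch below is a deliberate oversimplification, trained on the opening lines of sonnet 73 itself; a real model of this kind would be trained on far more text and capture far richer statistics.

```python
import random
from collections import defaultdict

# Toy word-level Markov chain trained on the opening of sonnet 73.
# It only shows the core idea: continuing text from learned
# statistics rather than by string matching.
corpus = (
    "that time of year thou mayst in me behold "
    "when yellow leaves or none or few do hang "
    "upon those boughs which shake against the cold"
).split()

# Record which words follow each word in the training text.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def continue_text(seed, length=10):
    """Extend a seed word by repeatedly sampling a plausible successor."""
    words = [seed]
    for _ in range(length):
        candidates = transitions.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(continue_text("when"))
```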

“Let’s start our adventure [by] recognizing that artificial intelligence, and particularly machine learning, is allowing us to experience and explore and analyze text in ways that we couldn’t before,” said Cerf. “Some people are feeling a little threatened by these kinds of capabilities. For example, the possibility of creating what some people will call deep fakes, whether that’s imagery or text, which looks very credible. If you think a little bit, you’ve probably seen some websites where you can go to the website, and it produces a picture of a person, except that that person never existed. But the person looks like a real person. Why does it look like a real person? Well, it’s because the features of the image are drawn from a statistical collection of data about faces that matches our expectations of how faces are put together.”

“We should be worried about things like that, especially now that we seem to be living in what some people call a post-factual world where alternative facts seem to be just as credible as the real ones. [T]he other thing we should be doing is teaching kids how to think critically about what they experience in the online world. I’m not quite at the point where I’m arguing for an internet driver’s license. But you know how people get driver’s ed classes when they’re in high school; maybe we should start Internet ed classes even back in elementary school,” said Cerf.

Back to Shakespeare and exploring language.

“Michael was interested in understanding the role that words have in the Shakespeare plays, and [it turns out] their frequency of use in the plays can tell us a little bit about the various genres of the plays, across tragedies, histories and comedies. Now this was, for me anyway, totally unexpected,” said Cerf.

“The system just looked for the frequency of use of the words ‘if,’ ‘and,’ and ‘but’ in this case. [Michael] did that for all of the plays of Shakespeare, and then tried to represent where they showed up in this multi-dimensional space [and] their relationships to all the words of the plays.”
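Witmore’s actual pipeline wasn’t detailed in the talk, but the underlying measurement is easy to sketch: give each play one coordinate for the relative frequency of each function word, and plays of the same genre tend to land near each other in the resulting space. The play texts below are hypothetical placeholders, not the real corpora.

```python
import re

# Each play becomes a point whose coordinates are the relative
# frequencies of "if", "and", and "but". The texts here are
# placeholders; real texts are available from e.g. Folger Digital
# Texts or Project Gutenberg.
plays = {
    "Henry V (history)": "and the field and the army and the forest is here",
    "Twelfth Night (comedy)": "if you but love me then if this were but true",
}

def fingerprint(text):
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words)
    return tuple(words.count(w) / total for w in ("if", "and", "but"))

for title, text in plays.items():
    print(title, fingerprint(text))
```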

Cerf took a stab at explaining why “and” was more prominent in histories, while “if” and “but” were more frequent in tragedies and comedies.

“I want you to think for a minute about a Shakespeare play, and remember, [at] the Globe Theatre, people would go and they’d stand in the middle of the theater and watch what was happening on the stage. The stages were fairly small; this is bigger than the Globe Theatre’s stage,” said Cerf. “So, imagine that it’s a history play and you have to convince the audience that they are seeing the vast landscape [where] wars and battles are happening. In order for him [Shakespeare] to paint with words what was going on in the scene, he had to say, ‘and this forest is here, and this building is there, and that army was over here.’ That was needed to paint a big landscape for people who were either reading or watching the plays. So that’s how the word ‘and’ shows up in such proliferation.”

Conditional conjunctions, argued Cerf, are necessary language elements in presenting comedy and tragedy.

“If and but turned out to be conditional, and they’re trying to juxtapose situations against each other, and often the comedies are based on this sort of collision of ‘conditionalities.’ You know, if this were only true, then that would happen. Or ‘if you but love me, then, you know, the world would be better.’ So, there are a whole series of relationships that get generated in the comedies, using if and but as sort of the fulcrum around which to develop the play’s plot. I never would have imagined that if it weren’t for the fact that he [Witmore] showed this relationship in that 3D space.”

But, as is often the case in Shakespeare’s works, things can go amiss.

“You’re feeling pretty good about your theory. You do the experiment, and you get bang-on, everything is exactly right, except for this point over here. Now, there are only two kinds of scientists, right? There’s one that looks at that point and says it’s measurement error and ignores it. But then there’s the other scientist who says, ‘Huh, that’s funny,’ and then goes off to try to figure out what that point is doing there. And that’s the one that gets a Nobel Prize. So, I hope you’re that kind. In any case, here’s this wonderful theory, we separated all the plays with if, and, and but, except that Othello shows up in the comedy space based on that metric. [It] has the structure of a comedy, but those of you [who] remember reading it, in high school maybe, will know that it is anything but that.”

“So the theory isn’t perfect. I actually don’t have an answer for you about why this structure in Othello, which says it should have been a comedy, is, in fact, a tragedy. But the fact is that the theory isn’t perfect, but it’s amazingly effective,” said Cerf. That sounds like AI writ large, which was (I think) Cerf’s point.

Cerf then switched gears, moving from Shakespeare and clustering by a few simple words to using semantic relationships as a powerful but also imperfect tool with which ML can tackle language problems.

“The interesting thing about word relationships is that words occupy a very high-dimensional space, and the semantic meaning of words is very high in dimensionality. One interesting question is whether languages, different languages, can be mapped into a semantic space and the relationship among the languages exhibited by that mapping,” said Cerf. “This is probably obvious to many of you, but when the machine is doing this kind of machine learning and mapping, it’s not doing linguistics and parsing of sentences and things like that. It’s really trying to associate words with their semantic meaning, and their relationship to other words.”
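As a rough illustration of what “words in a high-dimensional semantic space” means in practice, embedding models represent each word as a vector, and semantic relatedness becomes geometric closeness, typically measured with cosine similarity. The tiny vectors below are hand-made stand-ins, not learned embeddings, which would have hundreds of dimensions.

```python
import numpy as np

# Hand-made 4-D "embeddings" purely for illustration; real learned
# embeddings (e.g. word2vec or GloVe) have hundreds of dimensions and
# are derived from word co-occurrence statistics, not assigned by hand.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.7, 0.9, 0.0]),
    "cat":   np.array([0.1, 0.1, 0.2, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 for semantically related words."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["king"], embeddings["queen"]))  # high: related
print(cosine(embeddings["king"], embeddings["cat"]))    # low: unrelated
```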

“You’ve heard of these things called generative adversarial networks. You take an image, and you train the system to recognize the image: it’s a cat. You and I would look at it; it looks like a cat. It’s got little, you know, triangular ears, it’s got a big furry tail,” said Cerf. “And the generative adversarial system goes in and tries to find a few pixels to change in the image. Now we look at the image, and it looks exactly the same as it did before because only a few pixels changed, [but] the system tells you it’s a firetruck. Your reaction to this is sort of WTF.”

“The answer to [why this happened] is that the way the system was trained, it took all of the things that were supposed to be catlike and separated those from things that were not cat with hyperplanes in a high-dimensional space. Fiddling with the pixels might actually cause a point in the space, based on the machine learning model, to move across one of the hyperplanes into some other recognizable space, in this case, maybe a firetruck,” he said. “This system is not seeing the same way we do, but it has the interesting ability of seeing relationships that we might not be able to recognize.”
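What Cerf describes here is usually called an adversarial example rather than a generative adversarial network per se; the textbook recipe is the fast gradient sign method (FGSM), which nudges every pixel slightly along the loss gradient. Below is a minimal PyTorch sketch of the mechanics, using a tiny untrained model and a random image as stand-ins; a real attack would target a trained classifier and a real photo.

```python
import torch
import torch.nn as nn

# A tiny untrained classifier and a random "image" stand in for a real
# trained network and a real photo; only the attack mechanics matter here.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in for the cat photo
label = torch.tensor([3])                             # hypothetical "cat" class index

# Compute the loss gradient with respect to the pixels, not the weights.
loss = loss_fn(model(image), label)
loss.backward()

# FGSM: nudge every pixel a tiny step in the direction that raises the loss.
epsilon = 0.01
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Against a trained model, such an imperceptible change can push the
# image's point across a decision hyperplane, flipping the predicted class.
print(model(image).argmax().item(), model(adversarial).argmax().item())
```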

Cerf cited an example of looking at three different languages and clustering things in those languages that are semantically similar. “The thing that was really quite fascinating is that you can see over here that in English, Korean and Japanese, the same set of semantically similar words cluster in the same semantic space, even though the words themselves are not the same,” he said.

“The idea that we can do this means that when we’re doing language translation, and if we have enough content to train the system, it’s possible to get the system to translate from one language to another because of the semantic similarity that has been mapped into these spaces. Google now makes use of that machine learning capability in order to translate over 100 languages into each other. I have to say the quality of the translations will vary depending on how much training material was available of purportedly identical documents written in different languages that should, in theory, have the same semantic content,” he said.
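A sketch of the lookup this shared space enables: if two languages’ word vectors have been aligned into one space (the toy vectors below are invented for illustration; real systems learn the alignment from large volumes of parallel or comparable text), translating a word can reduce to a nearest-neighbor search in the other language’s vocabulary.

```python
import numpy as np

# Toy vectors invented for illustration: assume English and Japanese
# words have already been embedded into the same 2-D semantic space.
english = {"dog": np.array([0.9, 0.1]), "house": np.array([0.1, 0.9])}
japanese = {"inu": np.array([0.88, 0.12]), "ie": np.array([0.12, 0.88])}

def translate(word):
    """Nearest-neighbor lookup in the other language's vocabulary."""
    v = english[word]
    return min(japanese, key=lambda w: np.linalg.norm(japanese[w] - v))

print(translate("dog"))    # -> inu
print(translate("house"))  # -> ie
```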

A theme running through Cerf’s talk was that the tremendous power of computational ML tools comes with a certain brittleness. These approaches are powerful and can answer many questions and perform many tasks that are difficult. But statistics, not true learning, underlie virtually all of machine learning’s abilities, choices and actions. Still, he didn’t think this derivative, backward-looking approach meant such systems couldn’t innovate. He showed some lovely derivative artwork resembling Van Gogh’s painting style. He cited DeepMind’s success at beating Go masters and learning chess on its own as well.

He also singled out progress in computationally solving 3D protein structure. “At DeepMind, they were very interested in the protein folding problem. Some of you may have some experience with that. They recently announced that, I think I have this right, but there’s something like 88,000 proteins that are generated by human DNA, and they’ve done the folding for that, [with] maybe 95 percent or more of the folding calculated by the machine learning algorithms,” he said, but added that much of the model training came from a public project with “people who were playing around trying to figure out what the folding could be.”

It was a fascinating talk. Cerf presented a sort of three-quarters-full, one-quarter-empty glass perspective that emphasized the power of the computational tools while acknowledging limits and risks, the latter centered more on excessive dependence than anything else.

“I’m a little nervous about the idea that the machine may read a lot of books and tell us how to live. Because it feels like we shouldn’t be doing what the machine tells us to do. Some of you, however, might have heard of a novella that was written in 1909. It’s called The Machine Stops. It was written by E.M. Forster. And I would urge you to read it. It begins with a society that lives at home. All the food is delivered to the people at home. They interact with each other remotely somehow. He doesn’t say how. Boy, does this ever sound like everybody online on Zoom. And then one day the machine stops working. And the question is what happens to that society,” said Cerf.

Don’t panic yet. What computers can actually understand today is limited (he would probably say they understand nothing at all). To make the point, he presented an interaction he had with a Google chatbot, Tina:

“Tina [is] a bot that has absorbed billions of words of content. I tried to have a conversation with this little bot. I was trying to figure out if it could learn anything, and so I wanted Tina to put little emojis at the end of its responses to me. I said, ‘Can you put this emoji at the end?’ It said, ‘No problem, I can do that. It shouldn’t be a problem at all.’ And then it doesn’t do it. So, then I complain about it and it says, ‘Well, I missed out on sending you that. It won’t happen again, at least I don’t think it will. Sorry for the mishap.’”

“Now, what’s going on here is that the computer is absorbing my text and it’s responding to the text based on all the examples it has of human discourse. But it hasn’t got a clue what’s actually going on. There’s no real knowledge being transferred,” he said. “I’m sitting here. We’re having this interaction and it’s not succeeding very well, but the interactions sound plausible. Here it’s apologizing all over the place. So apparently it learned all about apologetics. I mean, if you were looking for a way to apologize for failing to do something, this bot might be very helpful. Well, of course, my point here was that if it actually did learn how to respond, it would mean that it actually is understanding what it was I was saying. And, of course, it didn’t.”

How much technology is too much? During Q&A, Cerf was asked, “So the same way that we have parks to preserve nature, what aspects of life should remain unspoiled by technology?”

Cerf responded, “Boy, you’d have to go a long ways away to avoid technology. Of course, it’s a vaguely pejorative question. The assumption is that the technology spoils life. I want to refute that argument and say that technology has made our lives easier in many respects. To the first order, I think technology has offered us enormous benefits. Look how some of our survival during the pandemic has been dependent very much on technology, whether it’s the creation of the vaccines or our ability to interact remotely. On the other hand, I’d be the first guy to admit that sometimes technology is its own disease and we need to keep that in mind.”

Short Cerf Bio

Vinton G. Cerf is vice president and Chief Internet Evangelist for Google. He contributes to global policy development and continued spread of the Internet. Widely known as one of the “Fathers of the Internet,” Cerf is the co-designer of the TCP/IP protocols and the architecture of the Internet. He has served in executive positions at MCI, the Corporation for National Research Initiatives and the Defense Advanced Research Projects Agency and on the faculty of Stanford University.

Vint Cerf served as chairman of the board of the Internet Corporation for Assigned Names and Numbers (ICANN) from 2000 to 2007 and has been a Visiting Scientist at the Jet Propulsion Laboratory since 1998. Cerf served as founding president of the Internet Society (ISOC) from 1992 to 1995. Cerf is a Foreign Member of the British Royal Society and the Swedish Academy of Engineering; a Fellow of IEEE, ACM, the American Association for the Advancement of Science, the American Academy of Arts and Sciences, the International Engineering Consortium, the Computer History Museum, the British Computer Society, the Worshipful Company of Information Technologists, and the Worshipful Company of Stationers; and a member of the National Academy of Engineering. He currently serves as Past President of the Association for Computing Machinery and chairman of the American Registry for Internet Numbers (ARIN), and completed a term as chairman of the Visiting Committee on Advanced Technology for the US National Institute of Standards and Technology. President Obama appointed him to the National Science Board in 2012.

Cerf is a recipient of numerous awards and commendations in connection with his work on the internet, including the U.S. Presidential Medal of Freedom, the U.S. National Medal of Technology, the Queen Elizabeth Prize for Engineering, the Prince of Asturias Award, the Tunisian National Medal of Science, the Japan Prize, the Charles Stark Draper Prize, the ACM Turing Award, Officer of the Legion d’Honneur, and 29 honorary degrees. In December 1994, People magazine identified Cerf as one of that year’s “25 Most Intriguing People.”
