SC21 Keynote: Internet Pioneer Vint Cerf on Shakespeare, Chatbots, and Being Human

By John Russell

November 17, 2021

Unlike the deep technical dives of many SC keynotes, Internet pioneer Vint Cerf steered clear of the trenches and took a leisurely stroll through a range of human-machine interactions, touching on ML’s growing capabilities while noting potholes to be avoided if possible. Cerf, of course, is co-designer with Bob Kahn of the TCP/IP protocols and architecture of the internet. He’s heralded as one of the fathers of the Internet and today is vice president and chief evangelist for Google. (Brief bio at end of the article).

“[The] humanities ask a very simple question – what does it mean to be human? So, we try to answer that question. We study music and poetry, we study art, languages, [and] history to try to understand how humans affect the flow of history, how their decisions and preferences and excitement and joy and anger and everything else [shape it]. Then we assume that those palpable expressions shown in text and art are somehow telling us more than just simple biology. Surely, we are more than just our DNA, at least, I hope so,” said Cerf, sharing credit for that formulation of the humanities’ goal with colleague Michael Witmore, director of the Folger Shakespeare Library in Washington.

Vint Cerf, Google

The broad question posed by Cerf in his SC21 keynote (Computing and the Humanities) is how and to what extent human-computer interactions can contribute to the humanities. Language, visual art, and critical thinking all made their way into Cerf’s presentation. The implied question, not answered but frequently hinted at, is to what extent computers will be tools, assistants, partners, or masters in the humanities.

He began with a cautionary tale built around Shakespeare’s Sonnet 73: a computer system trained on the Bard’s works is presented with an unfinished fragment of the sonnet and, as you may have guessed, is able to (mostly) generate the missing text, off by just one word.

“The point I want to make is that this wasn’t simply a thing that did string matching and then plucked out the rest of the Sonnet. This is generated, based on statistical information, almost what Shakespeare wrote. The reason that’s interesting is that if we chose to provide some other preambles that were not written by Shakespeare, the system would still try its best to produce a statistically valid conclusion to the rest of the sonnet,” said Cerf. “There might be a time when you could, if you were skilled enough, you might be able to write something which is very Shakespearean at the beginning, and then let the system produce the rest of it, which you could then discover miraculously as a Shakespeare piece that no one had found before, and take a picture of and sell it as an NFT for $69 million.”
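The statistical generation Cerf describes can be illustrated with a toy sketch. The bigram (Markov-chain) model below is a drastically simplified stand-in for the large neural model he was discussing, but the principle is the same: continue a seed by sampling the words that statistically tend to follow. The corpus here is the opening of Sonnet 73 itself; everything else is invented for illustration.

```python
import random
from collections import defaultdict

# Tiny training corpus: the first lines of Sonnet 73 (public domain).
corpus = (
    "that time of year thou mayst in me behold "
    "when yellow leaves or none or few do hang "
    "upon those boughs which shake against the cold"
).split()

# Build a table mapping each word to the words observed to follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(seed, length=8, rng=random.Random(73)):
    """Continue `seed` by repeatedly sampling a statistically likely next word."""
    words = [seed]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("when"))
```

Given a different seed that never appears in the training text, a model like this simply stalls; a large model, as Cerf notes, would instead press on and produce a statistically plausible (if spurious) continuation.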

“Let’s start our adventure [by] recognizing that artificial intelligence and particularly machine learning is allowing us to experience and explore and analyze text in ways that we couldn’t before,” said Cerf. “Some people are feeling a little threatened by, by these kinds of capabilities. For example, the possibility of creating what some people will call deep fakes, whether that’s imagery, or text, which looks very credible. If you think a little bit, you’ve probably seen some websites where you can go to the website, and it produces a picture of a person, except that that person never existed. But the person looks like a real person. Why does it look like a real person? Well, it’s because the features of the image are drawn from a statistical collection of data about faces, that matches our expectations of how faces are put together.”

“We should be worried about things like that, especially now that we seem to be living in what some people call a post factual world where alternative facts seem to be just as credible as the real ones. [T]he other thing we should be doing is teaching kids how to think critically about what they experience in the online world. I’m not quite at the point where I’m arguing for an internet driver’s license. But you know how people get driver’s ed classes when they’re in high school, maybe we should start Internet ed classes even back in elementary school,” said Cerf.

Back to Shakespeare and exploring language.

“Michael was interested in understanding the role that words have in the Shakespeare plays, and [it turns out] their frequency of use in the plays can tell us a little bit about the various genres of the plays through tragedies, histories and comedies. Now this was for me, anyway, totally unexpected,” said Cerf.

“The system just looked for the frequency of the use of the word if, and, and but in this case. [Michael] did that for all of the plays of Shakespeare, and then tried to represent where they showed up in this multi-dimensional space [and] their relationships to all the words of the plays.”

Cerf took a stab at explaining why “and” was more prominent in histories while “if” and “but” were more frequent in tragedies and comedies.
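The kind of analysis Cerf attributes to Witmore can be sketched in a few lines: count how often “if,” “and,” and “but” occur in each play, normalize by length, and treat each play as a point in a three-dimensional space. The two text snippets below are invented stand-ins, not real play texts.

```python
from collections import Counter

# Invented miniature "plays" illustrating the two conjunction profiles.
plays = {
    "history_like": "and the army stood here and the forest lay there and the king rode forth",
    "comedy_like": "if you but love me then all is well but if you will not then we part",
}

def conjunction_profile(text, words=("if", "and", "but")):
    """Relative frequency of the chosen conjunctions -- one coordinate per word."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    n = len(tokens)
    return {w: counts[w] / n for w in words}

for name, text in plays.items():
    print(name, conjunction_profile(text))
```

With all of Shakespeare’s plays run through a profile like this, clustering the resulting points is what separates the genres in the multi-dimensional space Cerf describes.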

“I want you to think for a minute about a Shakespeare play, and remember, the Globe Theater, people would go and they’d stand in the middle of the theater and watch what was happening on the stage. The stages were fairly small, this is bigger than the Globe Theatre’s stage,” said Cerf. “So, imagine that it’s a history play and you have to convince the audience that they are seeing the vast landscape [where] wars and battles are happening. In order for him [Shakespeare] to paint with words what was going on in the scene, you had to say, ‘and this forest is here, and this building is there, and that army was over here.’ That was needed to paint a big landscape for people who were either reading or watching the plays. So that’s how the word and shows up in such proliferation.”

Conditional conjunctions, argued Cerf, are necessary language elements in presenting comedy and tragedy.

“If and but turned out to be conditional, and they’re trying to juxtapose situations against each other, and often the comedies are based on this sort of collision of ‘conditionalities.’ You know, if this were only true, then that would happen. Or ‘if you but love me, then you know, the world would be better.’ So, there are a whole series of relationships that get generated in the comedies, using if and but as sort of the fulcrum around which to develop the play’s plot. I never would have imagined that if it weren’t for the fact that he [Witmore] showed this relationship in that 3D space.”

But, as is often the case in Shakespeare’s works, things can go amiss.

“You’re feeling pretty good about your theory. You do the experiment, and you get bang-on results, everything is exactly right, except for this point over here. Now, there are only two kinds of scientists, right? There’s one that looks at that point and says it’s measurement error and ignores it. But then there’s the other scientist who says, ‘Huh, that’s funny,’ and then goes off to try to figure out what that point is doing there. And that’s the one that gets a Nobel Prize. So, I hope you’re that kind. In any case, here’s this wonderful theory, we separated all the plays with if, and, and but except that Othello shows up in the comedy space based on that metric. [It] has the structure of a comedy but those of you who remember reading it, in high school maybe, will know that it is anything but that.”

“So the theory isn’t perfect. I actually don’t have an answer for you about why this structure in Othello, which says it should have been a comedy, is, in fact, a tragedy. But the fact is that the theory isn’t perfect, but it’s amazingly effective,” said Cerf. That sounds like AI writ large, which was (I think) Cerf’s point.

Cerf then switched gears, moving from Shakespeare and clustering by a few simple words to using semantic relationships as a powerful but also imperfect tool with which ML can tackle language problems.

“The interesting thing about word relationships is that words occupy a very high dimensional space, and the semantic meaning of words is very high in dimensionality. One interesting question is whether languages, different languages, can be mapped into a semantic space and the relationship among the languages exhibited by that mapping,” said Cerf. “This is probably obvious to many of you, but when the machine is doing this kind of machine learning and mapping, it’s not doing linguistics and parsing of sentences and things like that. It’s really trying to associate words with their semantic meaning, and their relationship to other words.”
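The “semantic space” idea can be sketched concretely: represent each word as a vector and measure similarity by the cosine of the angle between vectors. The three-dimensional vectors below are made up for illustration; real embeddings have hundreds of dimensions learned from co-occurrence statistics, not hand-assigned values.

```python
import math

# Invented toy embeddings: related words get nearby vectors.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.8, 0.9, 0.1],
    "apple": [0.1, 0.0, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for parallel vectors, near 0 for unrelated ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine(vectors["king"], vectors["queen"]))  # semantically near
print(cosine(vectors["king"], vectors["apple"]))  # semantically distant
```

As Cerf says, no parsing or linguistics is involved; position in the space alone encodes the word relationships.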

“You’ve heard of these things called generative adversarial networks. You take an image, and you train the system to recognize the image, it’s a cat. You and I would look at it; it looks like a cat. It’s got little, you know, triangular ears, it’s got a big furry tail,” said Cerf. “And the generative adversarial system goes in and tries to find a few pixels to change in the image. Now we look at the image, and it looks exactly the same as it did before because only a few pixels change, [but] the system tells you it’s a firetruck. Your reaction to this is sort of WTF.”

“The answer to [why this happened] is that the way the system was trained, it took all of the things that were supposed to be catlike and separated those from things that were not cat with hyperplanes in a high dimensional space. Fiddling with the pixels might actually cause a point in the space, based on the machine learning model, to move across one of the hyperplanes into some other recognizable space, in this case, maybe a firetruck,” he said. “This system is not seeing the same way we do, but it has the interesting ability of seeing relationships that we might not be able to recognize.”
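A toy version of that hyperplane picture makes the mechanism visible. The linear classifier below separates “cat” from “firetruck” points with a hyperplane w·x + b = 0; a tiny nudge along the normal direction pushes a point across the boundary even though the point barely moves. The weights and labels are invented, and real adversarial attacks work against deep networks rather than a two-weight line, but the geometric intuition is the one Cerf describes.

```python
# Normal vector and offset of the separating hyperplane (invented values).
w = [1.0, -1.0]
b = 0.0

def classify(x):
    """Label a 2-d point by which side of the hyperplane w.x + b = 0 it falls on."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "cat" if score > 0 else "firetruck"

x = [0.51, 0.50]  # sits just barely on the "cat" side (score = 0.01)
print(classify(x))

# Perturb slightly against the normal direction -- the adversarial direction.
eps = 0.02
x_adv = [xi - eps * wi for xi, wi in zip(x, w)]
print(classify(x_adv))  # the tiny change crossed the hyperplane
```

The perturbation moves the point by only 0.02 in each coordinate, yet the label flips, which is exactly the “few pixels change, it’s a firetruck” effect.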

Cerf cited an example of looking at three different languages and clustering things in those languages that are semantically similar. “The thing that was really quite fascinating is that you can see over here that in English, Korean and Japanese, the same set of words are the same semantically similar words [and] cluster in the same semantic space, even though the words themselves are not the same,” he said.

“The idea that we can do this means that when we’re doing language translation, and if we have enough content to train the system, it’s possible to get the system to translate from one language to another because of the semantic similarity that has been mapped into these spaces. Google now makes use of that machine learning capability in order to translate over 100 languages into each other. I have to say the quality of the translations will vary depending on how much training material was available of purportedly identical documents written in different languages that should, in theory, have the same semantic content,” he said.
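The cross-lingual clustering Cerf describes can be sketched as a shared space in which translations of a word land near each other. The coordinates below are invented stand-ins for what a trained multilingual embedding would produce; the languages match his example (English, Korean, Japanese).

```python
# Hypothetical multilingual embedding: one shared 2-d semantic space.
embedding = {
    ("en", "water"): [0.90, 0.10],
    ("ko", "물"):    [0.88, 0.12],
    ("ja", "水"):    [0.91, 0.09],
    ("en", "fire"):  [0.10, 0.90],
}

def nearest(query, space):
    """Find the closest other entry by squared Euclidean distance in the space."""
    qv = space[query]
    others = (k for k in space if k != query)
    return min(others, key=lambda k: sum((a - b) ** 2 for a, b in zip(space[k], qv)))

print(nearest(("en", "water"), embedding))  # a translation, not ("en", "fire")
```

Translation then becomes, roughly, a lookup: map a word into the shared space and read off the nearest word in the target language, with quality depending (as Cerf notes) on how much parallel training material was available.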

A theme running through Cerf’s talk was that the tremendous power of computational ML tools comes with brittleness. These approaches are powerful and can answer many questions and perform many difficult tasks. But statistics, not true learning, underlie virtually all of machine learning’s abilities, choices, and actions. Still, he didn’t think this derivative, backward-looking approach meant such systems couldn’t innovate. He showed some lovely derivative artwork resembling Van Gogh’s painting style, and he cited DeepMind’s success at beating Go masters and learning chess on its own as well.

He also singled out progress in computationally solving 3D protein structure. “At DeepMind, they were very interested in the protein folding problem. Some of you may have some experience with that. They recently announced that, I think I have this right, but there’s something like 88,000 proteins that are generated by human DNA, and they’ve done the folding for that, [with] maybe 95 percent or more of the folding calculated by the machine learning algorithms,” he said, but added that much of the model training came from a public project with “people who were playing around trying to figure out what the folding could be.”

It was a fascinating talk. Cerf presented a sort of three-quarters-full, one-quarter-empty glass perspective that emphasized the power of the computational tools along with their limits and risks, the latter centering more on excessive dependence than anything else.

“I’m a little nervous about the idea that the machine may read a lot of books and tell us how to live. Because it feels like we shouldn’t be doing what the machine tells us to do. Some of you, however, might have heard of a novella that was written in 1909. It’s called The Machine Stops. It was written by E.M. Forster. And I would urge you to read [it]. It begins with a society that lives at home. All the food is delivered to the people at home. They interact with each other remotely somehow. He doesn’t say how. Boy, does this ever sound like everybody online on Zoom. And then one day the machine stops working. And the question is what happens to that society,” said Cerf.

Don’t panic yet. What computers can actually understand today is limited (he would probably say nonexistent). To make the point, he presented an interaction he had with a Google chatbot, Tina:

“Tina [is] a bot that has absorbed billions of words of content. I tried to have a conversation with this little bot. I was trying to figure out if it could learn anything, and so I wanted Tina to put little emojis at the end of its responses to me. I said, ‘Can you put this emoji at the end?’ It said, ‘No problem, I can do that. It shouldn’t be a problem at all.’ And then it doesn’t do it. So, then I complain about it and it says, ‘Well, I missed out on sending you that. It won’t happen again, at least I don’t think it will. Sorry for the mishap.’”

“Now, what’s going on here is that the computer is absorbing my text and it’s responding to the text based on all the examples it has of human discourse. But it hasn’t got a clue what’s actually going on. There’s no real knowledge being transferred,” he said. “I’m sitting here. We’re having this interaction and it’s not succeeding very well, but the interactions sound plausible. Here it’s apologizing all over the place. So apparently it learned all about apologetics. I mean, if you were looking for a way to apologize for failing to do something, this bot might be very helpful. Well, of course, my point here was that if it actually did learn how to respond, it would mean that it actually is understanding what it was I was saying. And, of course, it didn’t.”

How much technology is too much? During Q&A, Cerf was asked, “So the same way that we have parks to preserve nature, what aspects of life should remain unspoiled by technology?”

Cerf responded, “Boy, you’d have to go a long ways away to avoid technology. Of course, it’s a vaguely pejorative question. The assumption is that the technology spoils life. I want to refute that argument and say that technology has made our lives easier in many respects. To the first order, I think technology has offered us enormous benefits. Look how some of our survival during the pandemic has been dependent very much on technology, whether it’s the creation of the vaccines or our ability to interact remotely. On the other hand, I’d be the first guy to admit that sometimes technology is its own disease and we need to keep that in mind.”

Short Cerf Bio

Vinton G. Cerf is vice president and Chief Internet Evangelist for Google. He contributes to global policy development and continued spread of the Internet. Widely known as one of the “Fathers of the Internet,” Cerf is the co-designer of the TCP/IP protocols and the architecture of the Internet. He has served in executive positions at MCI, the Corporation for National Research Initiatives and the Defense Advanced Research Projects Agency and on the faculty of Stanford University.

Vint Cerf served as chairman of the board of the Internet Corporation for Assigned Names and Numbers (ICANN) from 2000 to 2007 and has been a Visiting Scientist at the Jet Propulsion Laboratory since 1998. Cerf served as founding president of the Internet Society (ISOC) from 1992 to 1995. Cerf is a Foreign Member of the British Royal Society and Swedish Academy of Engineering, and Fellow of IEEE, ACM, the American Association for the Advancement of Science, the American Academy of Arts and Sciences, the International Engineering Consortium, the Computer History Museum, the British Computer Society, the Worshipful Company of Information Technologists, and the Worshipful Company of Stationers, and a member of the National Academy of Engineering. He currently serves as Past President of the Association for Computing Machinery and chairman of the American Registry for Internet Numbers (ARIN), and completed a term as Chairman of the Visiting Committee on Advanced Technology for the US National Institute of Standards and Technology. President Obama appointed him to the National Science Board in 2012.

Cerf is a recipient of numerous awards and commendations in connection with his work on the internet, including the U.S. Presidential Medal of Freedom, U.S. National Medal of Technology, the Queen Elizabeth Prize for Engineering, the Prince of Asturias Award, the Tunisian National Medal of Science, the Japan Prize, the Charles Stark Draper award, the ACM Turing Award, Officer of the Legion d’Honneur and 29 honorary degrees. In December 1994, People magazine identified Cerf as one of that year’s “25 Most Intriguing People.”
