Unlike the deep technical dives of many SC keynotes, Internet pioneer Vint Cerf steered clear of the trenches and took a leisurely stroll through a range of human-machine interactions, touching on ML’s growing capabilities while noting potholes to be avoided if possible. Cerf, of course, is co-designer with Bob Kahn of the TCP/IP protocols and architecture of the internet. He’s heralded as one of the fathers of the Internet and today is vice president and chief evangelist for Google. (A brief bio appears at the end of the article.)
“[The] humanities ask a very simple question – what does it mean to be human? So, we try to answer that question. We study music and poetry, we study art, languages, [and] history to try to understand how humans affect the flow of history, how their decisions and preferences and excitement and joy and anger and everything else. Then we assume that those palpable expressions shown in text and art are somehow telling us more than just simple biology. Surely, we are more than just our DNA, at least, I hope so,” said Cerf, sharing credit for that formulation of the humanities’ goal with colleague Michael Witmore, director of the Folger Shakespeare Library in Washington.
The broad question posed by Cerf in his SC21 keynote (Computing and the Humanities) is how and to what extent can human-computer interactions contribute to the humanities. Language, visual art, and critical thinking all made their way into Cerf’s presentation. The implied question, not answered but frequently hinted at, is to what extent will computers be tools, assistants, partners, or masters in the humanities.
He began with a cautionary tale built around Shakespeare’s Sonnet 73: a computer system trained on the Bard’s works was presented with an unfinished fragment of the sonnet and, as you may have guessed, was able to (mostly) generate the missing text, off by just one word.
“The point I want to make is that this wasn’t simply a thing that did string matching and then plucked out the rest of the Sonnet. This is generated, based on statistical information, almost what Shakespeare wrote. The reason that’s interesting is that if we chose to provide some other preambles that were not written by Shakespeare, the system would still try its best to produce a statistically valid conclusion to the rest of the sonnet,” said Cerf. “There might be a time when you could, if you were skilled enough, you might be able to write something which is very Shakespearean at the beginning, and then let the system produce the rest of it, which you could then discover miraculously as a Shakespeare piece that no one had found before, and take a picture of and sell it as an NFT for $69 million.”
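The statistical generation Cerf describes can be illustrated in miniature with a bigram model: map each word to the words observed to follow it, then continue a seed by sampling successors. Everything below is an invented toy, trained on a single Sonnet 73 fragment, nowhere near the scale or sophistication of the actual system.

```python
import random
from collections import defaultdict

# Illustrative corpus: the opening of Sonnet 73. A real system trains
# on vastly more text and uses far richer statistics than bigrams.
CORPUS = ("that time of year thou mayst in me behold "
          "when yellow leaves or none or few do hang").split()

def build_bigrams(tokens):
    """Map each word to the list of words observed to follow it."""
    model = defaultdict(list)
    for a, b in zip(tokens, tokens[1:]):
        model[a].append(b)
    return model

def generate(model, seed, n_words, rng):
    """Continue a seed word by repeatedly sampling an observed successor."""
    out = [seed]
    for _ in range(n_words):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

model = build_bigrams(CORPUS)
print(generate(model, "that", 8, random.Random(0)))
```

Given a seed the model has seen, it reproduces a plausible continuation; given an unseen seed, it simply stops. Modern language models generalize statistically where this toy cannot, which is exactly why a non-Shakespearean preamble still yields a “statistically valid” conclusion.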
“Let’s start our adventure [by] recognizing that artificial intelligence and particularly machine learning is allowing us to experience and explore and analyze text in ways that we couldn’t before,” said Cerf. “Some people are feeling a little threatened by these kinds of capabilities. For example, the possibility of creating what some people will call deep fakes, whether that’s imagery, or text, which looks very credible. If you think a little bit, you’ve probably seen some websites where you can go to the website, and it produces a picture of a person, except that that person never existed. But the person looks like a real person. Why does it look like a real person? Well, it’s because the features of the image are drawn from a statistical collection of data about faces, that matches our expectations of how faces are put together.”
“We should be worried about things like that, especially now that we seem to be living in what some people call a post factual world where alternative facts seem to be just as credible as the real ones. [T]he other thing we should be doing is teaching kids how to think critically about what they experience in the online world. I’m not quite at the point where I’m arguing for an internet driver’s license. But you know how people get driver’s ed classes when they’re in high school, maybe we should start Internet ed classes even back in elementary school,” said Cerf.
Back to Shakespeare and exploring language.
“Michael [Witmore] was interested in understanding the role that words have in the Shakespeare plays, and [it turns out] their frequency of use in the plays can tell us a little bit about the various genres of the plays through tragedies, histories and comedies. Now this was for me, anyway, totally unexpected,” said Cerf.
“The system just looked for the frequency of the use of the word if, and, and but in this case. [Michael] did that for all of the plays of Shakespeare, and then tried to represent where they showed up in this multi-dimensional space [and] their relationships to all the words of the plays.”
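The frequency fingerprinting Cerf describes can be sketched in a few lines. The two snippets of text and all function names below are invented stand-ins for full play texts, not Witmore’s actual pipeline; the point is only that each play becomes a point in a small vector space.

```python
import re
from collections import Counter

def conjunction_vector(text, words=("if", "and", "but")):
    """Represent a text as frequencies (per 1,000 words) of chosen words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    return [1000 * counts[w] / len(tokens) for w in words]

# Invented miniature stand-ins for a history and a comedy.
history = "and the forest is here and the castle is there and the army waits"
comedy = "if you but love me then all is well but if not we jest and part"

print(conjunction_vector(history))  # "and"-heavy
print(conjunction_vector(comedy))   # "if"/"but"-heavy
```

Clustering the plays as points in this three-dimensional if/and/but space is what lets the genres separate, as Cerf goes on to describe.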
Cerf took a stab at explaining why “and” was more prominent in histories while “if” and “but” were more frequent in tragedies and comedies.
“I want you to think for a minute about a Shakespeare play, and remember, the Globe Theater, people would go and they’d stand in the middle of the theater and watch what was happening on the stage. The stages were fairly small, this is bigger than the Globe Theatre’s stage,” said Cerf. “So, imagine that it’s a history play and you have to convince the audience that they are seeing a vast landscape where wars and battles are happening. In order for him [Shakespeare] to paint with words what was going on in the scene, he had to say, ‘and this forest is here, and this building is there, and that army was over here.’ That was needed to paint a big landscape for people who were either reading or watching the plays. So that’s how the word and shows up in such proliferation.”
Conditional conjunctions, argued Cerf, are necessary language elements in presenting comedy and tragedy.
“If and but turned out to be conditional, and they’re trying to juxtapose situations against each other, and often the comedies are based on this sort of collision of ‘conditionalities.’ You know, if this were only true, then that would happen. Or ‘if you but love me, then you know, the world would be better.’ So, there are a whole series of relationships that get generated in the comedies, using if and but as sort of the fulcrum around which to develop the play’s plot. I never would have imagined that if it weren’t for the fact that he [Witmore] showed this relationship in that 3D space.”
But, as is often the case in Shakespeare’s works, things can go amiss.
“You’re feeling pretty good about your theory. You do the experiment, and you get bang-on everything is exactly right, except for this point over here. Now, there are only two kinds of scientists, right? There’s one that looks at that point and says it’s measurement error and ignores it. But then there’s the other scientist who says, Huh, that’s funny, and then goes off to try to figure out what that point is doing there. And that’s the one that gets a Nobel Prize. So, I hope you’re that kind. In any case, here’s this wonderful theory, we separated all the plays with if, and, and but except that Othello shows up in the comedy space based on that metric. [It] has the structure of a comedy, but those of you who remember reading it, in high school maybe, will know that it is anything but that.”
“So the theory isn’t perfect. I actually don’t have an answer for you about why this structure in Othello, which says it should have been a comedy, is, in fact, a tragedy. But the fact is that the theory isn’t perfect, but it’s amazingly effective,” said Cerf. That sounds like AI writ large, which was (I think) Cerf’s point.
Cerf then switched gears, moving from Shakespeare and clustering by a few simple words to using semantic relationships as a powerful but also imperfect tool with which ML can tackle language problems.
“The interesting thing about word relationships is that words occupy a very high dimensional space, and the semantic meaning of words is very high in dimensionality. One interesting question is whether languages, different languages, can be mapped into a semantic space and the relationship among the languages exhibited by that mapping,” said Cerf. “This is probably obvious to many of you, but when the machine is doing this kind of machine learning and mapping, it’s not doing linguistics and parsing of sentences and things like that. It’s really trying to associate words with their semantic meaning, and their relationship to other words.”
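The mapping Cerf describes, words as points in a high-dimensional space, is typically probed with cosine similarity: words whose vectors point in similar directions are semantically close. Below is a minimal sketch using hand-made three-dimensional vectors; real embeddings have hundreds of dimensions learned from billions of words, and these particular coordinates are invented.

```python
import math

# Hand-made toy embeddings (imagined dimensions: royalty, human, furry).
# Real models learn their coordinates from huge corpora.
EMBEDDINGS = {
    "king":  [0.9, 0.9, 0.0],
    "queen": [0.8, 0.9, 0.1],
    "cat":   [0.0, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: math.sqrt(sum(a * a for a in x))
    return dot / (norm(u) * norm(v))

print(cosine(EMBEDDINGS["king"], EMBEDDINGS["queen"]))  # near 1: similar meaning
print(cosine(EMBEDDINGS["king"], EMBEDDINGS["cat"]))    # near 0: unrelated
```

Note that nothing here parses sentences or does linguistics, which is Cerf’s point: the geometry of the space carries the semantic relationships.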
“You’ve heard of these things called generative adversarial networks. You take an image, and you train the system to recognize the image, it’s a cat. You and I would look at it; it looks like a cat. It’s got little, you know, triangular ears, it’s got a big furry tail,” said Cerf. “And the generative adversarial system goes in and tries to find a few pixels to change in the image. Now we look at the image, and it looks exactly the same as it did before because only a few pixels change, [but] the system tells you it’s a firetruck. Your reaction to this is sort of WTF.”
“The answer to [why this happened] is that the way the system was trained, it took all of the things that were supposed to be catlike and separated those from things that were not cat with hyperplanes in a high dimensional space. Fiddling with the pixels might actually cause a point in the space, based on the machine learning model, to move across one of the hyperplanes into some other recognizable space, in this case, maybe a firetruck,” he said. “This system is not seeing the same way we do, but it has the interesting ability of seeing relationships that we might not be able to recognize.”
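The hyperplane intuition can be made concrete with a toy linear classifier. The weights, the three-“pixel” image, and the labels below are all invented for illustration; real classifiers have millions of weights, but the geometry of the failure is the same.

```python
# Toy linear classifier: predict "cat" when w.x + b > 0, else "firetruck".
# Weights, bias, and the tiny "image" are invented for illustration.
W = [0.4, -0.3, 0.2]
B = -0.05

def score(x):
    return sum(wi * xi for wi, xi in zip(W, x)) + B

def label(x):
    return "cat" if score(x) > 0 else "firetruck"

image = [0.5, 0.4, 0.1]   # sits just on the "cat" side of the hyperplane

# Nudge each "pixel" by at most 0.1 in the direction that lowers the
# score (the sign of the corresponding weight): a fast-gradient-style step.
eps = 0.1
adversarial = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(image, W)]

print(label(image), "->", label(adversarial))
```

Each coordinate moves by only 0.1, an imperceptible change, yet the point crosses the separating hyperplane and the predicted label flips, which is exactly the brittleness Cerf describes.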
Cerf cited an example of looking at three different languages and clustering things in those languages that are semantically similar. “The thing that was really quite fascinating is that you can see that in English, Korean and Japanese, the same set of semantically similar words cluster in the same semantic space, even though the words themselves are not the same,” he said.
“The idea that we can do this means that when we’re doing language translation, and if we have enough content to train the system, it’s possible to get the system to translate from one language to another because of the semantic similarity that has been mapped into these spaces. Google now makes use of that machine learning capability in order to translate over 100 languages into each other. I have to say the quality of the translations will vary depending on how much training material was available of purportedly identical documents written in different languages that should, in theory, have the same semantic content,” he said.
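Translation via a shared semantic space can be sketched as nearest-neighbor lookup: embed the source word, then find the closest target-language point. The two-dimensional vectors and romanized Korean words below are invented for illustration and bear no relation to the embeddings Google actually uses.

```python
import math

# Invented two-dimensional points in a shared semantic space.
ENGLISH = {"dog": [0.90, 0.10], "house": [0.10, 0.90]}
KOREAN = {"gae": [0.88, 0.12], "jib": [0.08, 0.92]}  # romanized, illustrative

def translate(word, source=ENGLISH, target=KOREAN):
    """Translate by finding the target word closest in the shared space."""
    vec = source[word]
    return min(target, key=lambda w: math.dist(vec, target[w]))

print(translate("dog"))    # nearest Korean point to "dog"
print(translate("house"))  # nearest Korean point to "house"
```

Translation quality in this scheme depends entirely on how well the two languages’ points line up in the shared space, which is why Cerf notes that quality varies with the amount of parallel training material.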
A theme running through Cerf’s talk was that the tremendous power of computational ML tools comes paired with brittleness. These approaches are powerful and can answer many questions and perform many tasks that are difficult. But statistics underlie virtually all of machine learning’s abilities, choices, and actions, not true learning. Still, he didn’t think this derivative, backward-looking approach meant such systems couldn’t innovate. He showed some lovely derivative artwork resembling Van Gogh’s painting style. He cited DeepMind’s success at beating Go masters and learning chess on its own as well.
He also singled out progress in computationally solving 3D protein structure. “At DeepMind, they were very interested in the protein folding problem. Some of you may have some experience with that. They recently announced that, I think I have this right, but there’s something like 88,000 proteins that are generated by human DNA, and they’ve done the folding for that, [with] maybe 95 percent or more of the folding calculated by the machine learning algorithms,” he said, but added that much of the model training came from a public project with “people who were playing around trying to figure out what the folding could be.”
It was a fascinating talk. Cerf presented a sort of three-quarters-full/one-quarter-empty glass perspective that emphasized the power of the computational tools but with limits and risks, the latter more around excessive dependence than anything else.
“I’m a little nervous about the idea that the machine may read a lot of books and tell us how to live. Because it feels like we shouldn’t be doing what the machine tells us to do. Some of you, however, might have heard of a novella that was written in 1909. It’s called The Machine Stops. It was written by E.M. Forster. And I would urge you to read it. It begins with a society that lives at home. All the food is delivered to the people at home. They interact with each other remotely somehow. He doesn’t say how. Boy, does this ever sound like everybody online on Zoom. And then one day the machine stops working. And the question is what happens to that society,” said Cerf.
Don’t panic yet. What computers can actually understand today is limited (he would probably say none at all). To make the point, he presented an interaction he had with a Google chatbot, Tina:
“Tina [is] a bot that has absorbed billions of words of content. I tried to have a conversation with this little bot. I was trying to figure out if it could learn anything, and so I wanted Tina to put little emojis at the end of its responses to me. I said, ‘Can you put this emoji at the end?’ It said, ‘No problem, I can do that. It shouldn’t be a problem at all.’ And then it doesn’t do it. So, then I complain about it and it says, ‘Well, I missed out on sending you that. It won’t happen again, at least I don’t think it will. Sorry for the mishap.’”
“Now, what’s going on here is that the computer is absorbing my text and it’s responding to the text based on all the examples it has of human discourse. But it hasn’t got a clue what’s actually going on. There’s no real knowledge being transferred,” he said. “I’m sitting here. We’re having this interaction and it’s not succeeding very well, but the interactions sound plausible. Here it’s apologizing all over the place. So apparently it learned all about apologetics. I mean, if you were looking for a way to apologize for failing to do something, this bot might be very helpful. Well, of course, my point here was that if it actually did learn how to respond, it would mean that it actually is understanding what it was I was saying. And, of course, it didn’t.”
How much technology is too much? During Q&A, Cerf was asked, “So the same way that we have parks to preserve nature, what aspects of life should remain unspoiled by technology?”
Cerf responded, “Boy, you’d have to go a long ways away to avoid technology. Of course, it’s a vaguely pejorative question. The assumption is that the technology spoils life. I want to refute that argument and say that technology has made our lives easier in many respects. To the first order, I think technology has offered us enormous benefits. Look how some of our survival during the pandemic has been dependent very much on technology, whether it’s the creation of the vaccines or our ability to interact remotely. On the other hand, I’d be the first guy to admit that sometimes technology is its own disease and we need to keep that in mind.”
Vinton G. Cerf is vice president and Chief Internet Evangelist for Google. He contributes to global policy development and continued spread of the Internet. Widely known as one of the “Fathers of the Internet,” Cerf is the co-designer of the TCP/IP protocols and the architecture of the Internet. He has served in executive positions at MCI, the Corporation for National Research Initiatives and the Defense Advanced Research Projects Agency and on the faculty of Stanford University.
Vint Cerf served as chairman of the board of the Internet Corporation for Assigned Names and Numbers (ICANN) from 2000 to 2007 and has been a Visiting Scientist at the Jet Propulsion Laboratory since 1998. Cerf served as founding president of the Internet Society (ISOC) from 1992 to 1995. Cerf is a Foreign Member of the British Royal Society and Swedish Academy of Engineering, and Fellow of IEEE, ACM, and American Association for the Advancement of Science, the American Academy of Arts and Sciences, the International Engineering Consortium, the Computer History Museum, the British Computer Society, the Worshipful Company of Information Technologists, the Worshipful Company of Stationers and a member of the National Academy of Engineering. He currently serves as Past President of the Association for Computing Machinery, chairman of the American Registry for Internet Numbers (ARIN) and completed a term as Chairman of the Visiting Committee on Advanced Technology for the US National Institute of Standards and Technology. President Obama appointed him to the National Science Board in 2012.
Cerf is a recipient of numerous awards and commendations in connection with his work on the internet, including the U.S. Presidential Medal of Freedom, U.S. National Medal of Technology, the Queen Elizabeth Prize for Engineering, the Prince of Asturias Award, the Tunisian National Medal of Science, the Japan Prize, the Charles Stark Draper award, the ACM Turing Award, Officer of the Legion d’Honneur and 29 honorary degrees. In December 1994, People magazine identified Cerf as one of that year’s “25 Most Intriguing People.”