The Power Behind Google

By Michael Feldman

January 6, 2006

“Few Web services require as much computation per request as search engines. On average, a single query on Google reads hundreds of megabytes of data and consumes tens of billions of CPU cycles. Supporting a peak request stream of thousands of queries per second requires an infrastructure comparable in size to that of the largest supercomputer installations.” So begins the description of the computational power required for Google's Web search engine in “Web Search for a Planet: The Google Cluster Architecture”, a publication of the IEEE Computer Society.

Though we most often associate high-performance computing with high-end mathematical applications such as climate modeling, fusion reaction simulations or quantum chromodynamics, one of the most common forms of data-intensive computing is Web searching, used by millions of people worldwide every day. Today, Web searching is so ubiquitous that most of us take it for granted that we can find the answer to just about anything on the Internet. But for the average person, it's hard to imagine the computational resources required to search and analyze petabytes of data millions of times per day.

So what is the nature of Google's computing infrastructure? How is it able to process thousands of queries per second from all over the world? And how will its infrastructure scale as the Web continues to grow? HPCwire recently spoke with Jeffrey Dean, Google Fellow in the Systems Infrastructure Group, to get the answers to these and other questions.

HPCwire: Tell us a little bit about your background and why you came to Google.

Dean: I received a B.S. in computer science and economics from the University of Minnesota. I then worked for a year and a half for the World Health Organization's Global Programme on AIDS, developing software for modeling the impact of the AIDS pandemic, before going to grad school. I received a Ph.D. in computer science from the University of Washington, doing research in high-level compiler optimizations for object-oriented languages. After graduating, I went to work for Digital Equipment Corporation's Western Research Lab (DEC WRL), where I worked on a variety of projects, including low-overhead profiling systems, some microprocessor architecture work, and some work on web-based information retrieval.

I joined Google in mid-1999 because my work at DEC WRL on information retrieval had whetted my appetite for working in that area. I knew a couple of people at Google, and figured it would be a fun place to work. I've always enjoyed working on problems that span a pretty wide range of computer science disciplines, and working on large-scale search engines is one of the best ways of doing that, because it requires solving problems across a really broad range of topics, including low-level system design, distributed systems, data compression, information retrieval, machine learning and user interfaces, with lots of general algorithmic problems thrown in at every turn.

HPCwire: Could you briefly describe the Google computing infrastructure and its rationale?

Dean: When designing our computing clusters, we place a great deal of emphasis on what sort of systems will give us the best price/performance. Search applications are relatively easy to parallelize, both within processing of a single query (by partitioning the index across machines), and across queries (by replicating each piece of the index across multiple machines and having each replica serve a fraction of the total traffic). Given this easy parallelism, the price/performance argument leads towards using clusters of large numbers of commodity PCs, that is, x86 processors, inexpensive hard drives, commodity Ethernet networking, etc.
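
To make that parallelism concrete, here is a minimal Python sketch, not Google's actual serving code, of an index partitioned into shards with each shard replicated; a query fans out to every shard, and any replica of a shard can answer for it. The shard counts, document IDs and data layout are invented for illustration.

```python
import random

# Toy inverted index split into shards, each shard replicated a few times.
# All names and numbers here are illustrative assumptions.
NUM_SHARDS = 4
NUM_REPLICAS = 3

index = []  # index[shard][replica] -> {term: [doc ids]}
for s in range(NUM_SHARDS):
    shard_docs = {"search": [s * 10 + 1, s * 10 + 2], "cluster": [s * 10 + 3]}
    index.append([dict(shard_docs) for _ in range(NUM_REPLICAS)])

def query(term):
    """Fan the query out across every shard (parallelism within one query)
    and answer each shard from any one replica (replication lets many
    queries be served at once)."""
    results = []
    for shard in index:
        replica = random.choice(shard)   # load-balance across replicas
        results.extend(replica.get(term, []))
    return sorted(results)

print(query("search"))
```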

Our clusters are typically composed of several thousand of these commodity machines, all connected via commodity Ethernet. The individual machines typically have gigabit NICs, and groups of around 40 machines are connected to commodity gigabit Ethernet switches. These switches are then connected into a large-scale core switch for the cluster, using a small number of gigabit connections per group of 40 machines. So, our bisection bandwidth is considerably less than a gigabit per machine. More bandwidth would be great, but it's obviously considerably more expensive, and our current configuration is the sweet spot for our applications.
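
A back-of-the-envelope calculation shows why this topology yields well under a gigabit of bisection bandwidth per machine. The number of uplinks per rack switch below is an assumption (the interview only says "a small number"); the 40-machine groups and gigabit NICs come from Dean's description.

```python
machines_per_switch = 40
nic_gbps = 1.0            # gigabit NIC per machine (from the interview)
uplinks_per_switch = 4    # "a small number" of gigabit uplinks -- assumed value
uplink_gbps = 1.0

into_switch = machines_per_switch * nic_gbps      # 40 Gbit/s at the edge
out_of_switch = uplinks_per_switch * uplink_gbps  # 4 Gbit/s toward the core

oversubscription = into_switch / out_of_switch    # 10:1 with these numbers
per_machine_core_gbps = out_of_switch / machines_per_switch

print(f"oversubscription {oversubscription:.0f}:1, "
      f"~{per_machine_core_gbps * 1000:.0f} Mbit/s per machine across the core")
```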

We don't disclose the exact number of clusters that we have, but we do have many of them around the world at various locations. There are a couple of reasons for this. First, we can give our users faster response times by using software that tries to direct user queries towards a cluster that is located near that user in terms of network latency. Second, it makes our search service considerably more robust. We try to have a bit of extra capacity at all times, so that we can turn off clusters at various points, either for planned events like hardware or network upgrades, or because we need to quickly turn off serving from a particular cluster — for example, if one of the core network switches fails.

Having lots of relatively small machines means that you get a lot more bang for the computing dollar, but it also means that the machines are less reliable than more expensive machines, and because there are so many of them, higher-level software has to be designed to tolerate failures — with thousands of machines, machine failures happen many times per day. Our software is designed to assume that the hardware can fail.  Once you do that, it becomes fairly simple to deal with a lot of failures. Our serving systems generally have multiple replicas for each piece of the system to provide fault tolerance to individual machine failures.
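
The "assume hardware fails" discipline Dean describes can be illustrated with a small sketch: a request is simply retried against another replica when one fails. The function names, failure rate and RPC stand-in below are hypothetical.

```python
import random

class ReplicaFailure(Exception):
    pass

def call_replica(replica_id, request):
    """Stand-in for an RPC to one replica; fails randomly to mimic the
    machine failures that happen daily at this scale."""
    if random.random() < 0.3:
        raise ReplicaFailure(f"replica {replica_id} is down")
    return f"result for {request!r} from replica {replica_id}"

def robust_call(replicas, request):
    """Try replicas in a random order until one answers; give up only if
    every replica of this piece of the system has failed."""
    for replica_id in random.sample(replicas, len(replicas)):
        try:
            return call_replica(replica_id, request)
        except ReplicaFailure:
            continue
    raise RuntimeError("all replicas failed")

print(robust_call([0, 1, 2], "query: hpc"))
```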

We've also designed our own file system, the Google File System (GFS), to reliably store large amounts of data on large clusters of machines. [For information on GFS visit http://labs.google.com/papers/gfs.html.]

Finally, when you're doing large-scale data processing, it's important not to separate the storage from where you're going to do the computation. You don't need really high-end storage arrays with massive amounts of bandwidth to process a large amount of data. If you do the scheduling right, you can read the data from local disks on thousands of machines simultaneously. By doing this, you can attain really good bandwidth from low-end storage systems with slightly clever software. So rather than moving the data to the machine, we try to move the computation to the data.
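
Here is a toy illustration of "moving the computation to the data": given a map of which machines hold a local copy of each data chunk, the scheduler prefers one of those machines, so the task reads from local disk rather than pulling data over the oversubscribed network. The chunk and machine names are invented.

```python
# chunk -> machines that hold a local copy of it (hypothetical layout)
chunk_locations = {
    "chunk-00": ["m17", "m42", "m88"],
    "chunk-01": ["m03", "m42", "m95"],
}

machine_load = {m: 0 for ms in chunk_locations.values() for m in ms}

def schedule(chunk):
    """Pick the least-loaded machine that already stores the chunk, so the
    computation moves to the data instead of the other way around."""
    local_machines = chunk_locations[chunk]
    machine = min(local_machines, key=lambda m: machine_load[m])
    machine_load[machine] += 1
    return machine

for chunk in chunk_locations:
    print(chunk, "->", schedule(chunk))
```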

HPCwire: What kinds of new infrastructure is Google looking at, in the near-term, to improve its price/performance?

Dean: Given the fact that our applications can be easily parallelized, chip multiprocessors (CMPs) look very attractive to us, compared with processors that go to great lengths to extract the highest single-thread performance possible.

My colleague, Luiz Barroso, has written up a nice article describing why CMP processors look very attractive for our applications [see http://labs.google.com/papers/priceofperformance.html].

As always, we're continually evaluating and refining our hardware infrastructure to explore which solutions provide the most attractive price/performance for our applications, but we are excited about the initial CMP processors coming out, as we feel their emphasis on high throughput for parallel applications rather than single-thread performance is a good match for our applications.

HPCwire: Could you describe the MapReduce model and implementation and how it is being used within Google?

Dean: MapReduce is a system originally developed by my colleague Sanjay Ghemawat and me as a way of describing computations that process input data to compute some derived data. The general programming model is to break the computation down into two distinct phases: a Map phase and a Reduce phase. Users specify a Map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a Reduce function that merges all intermediate values associated with the same intermediate key. The basic idea is similar to the Map and Reduce primitives found in LISP and many other functional languages.
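
The canonical example of this programming model is word counting. The sketch below is a single-process toy, not the MapReduce library itself: the user writes only the Map and Reduce functions, and a small driver stands in for grouping the intermediate values by key.

```python
from collections import defaultdict

def map_fn(doc_id, text):
    """Map: emit an intermediate (word, 1) pair for each word in the document."""
    for word in text.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    """Reduce: merge all intermediate values associated with the same key."""
    yield word, sum(counts)

def run_mapreduce(inputs, map_fn, reduce_fn):
    intermediate = defaultdict(list)
    for key, value in inputs:
        for ikey, ivalue in map_fn(key, value):
            intermediate[ikey].append(ivalue)   # group intermediate values by key
    output = []
    for ikey, ivalues in intermediate.items():
        output.extend(reduce_fn(ikey, ivalues))
    return sorted(output)

docs = [("d1", "the cluster serves the query"), ("d2", "the index is sharded")]
print(run_mapreduce(docs, map_fn, reduce_fn))
```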

What makes it interesting is that we've developed a MapReduce library that is able to take programs written in this style and make them run on clusters of hundreds or thousands of machines. The underlying library takes care of lots of the messy details that arise when running very large-scale parallel computations, including automatically parallelizing the computation, deciding which machines should work on which pieces (including biasing those scheduling decisions to consider data locality), and handling what happens when machines fail (with long-running jobs that execute on thousands of machines, machine failures happen with some regularity). It also does things like scheduling multiple copies of the same pieces of work towards the end of the computation, to minimize the job completion time (whichever copy finishes first “wins”), and collecting progress information on a centralized status page to make it easy to monitor a MapReduce computation.
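
The "backup task" idea, running duplicate copies of a straggling piece of work and keeping whichever finishes first, can be sketched with ordinary threads. This is an illustration of the scheduling trick, not how the MapReduce library implements it.

```python
import concurrent.futures
import random
import time

def work(task_id, copy):
    """Stand-in for one piece of work; some copies are slow stragglers."""
    time.sleep(random.uniform(0.01, 0.2))
    return f"task {task_id} finished (copy {copy})"

def run_with_backup(task_id, copies=2):
    """Run duplicate copies of the same task; whichever finishes first wins."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=copies) as pool:
        futures = [pool.submit(work, task_id, c) for c in range(copies)]
        done, _ = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        return next(iter(done)).result()

print(run_with_backup(7))
```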

We've been pleasantly surprised at how applicable the general MapReduce model has been to a wide variety of problems: it's being used internally at Google in areas as diverse as our core crawling and indexing system, data mining, statistical machine translation, our advertising systems, processing of satellite imagery, etc. It makes it easy for people within Google to write relatively simple code and have that code run reasonably efficiently on thousands of machines. In a typical day at Google, thousands of different MapReduce computations are run with hundreds of distinct Map and Reduce functions across our various computational clusters.

HPCwire: Is MapReduce something you would make publicly available?

Dean: We don't currently make it available. We've had thoughts about it. It would be a moderate amount of effort on our part to divorce it from other pieces of our software, such as our cluster scheduling system. Also, we put a fair amount of effort into it and it's not clear that we would want to make it available to our competitors. At the same time, we feel like there are a lot of academic projects that would benefit from having access to something like this. So, in the future, I wouldn't be surprised if we did make it available in some form. [For more information about MapReduce visit http://labs.google.com/papers/mapreduce.html.]

HPCwire: As the capacity of the Web grows, how is the Google infrastructure going to change? Is the current model scalable for the foreseeable future? And besides Web growth, what other kinds of changes do you foresee that will create infrastructure challenges for Google?

Dean: Our software infrastructure undergoes fairly rapid evolution in response to changes in the underlying hardware platform and also our desire to scale our system in a variety of dimensions. For example, for Web search, the major dimensions are the number of documents searched, the number of user queries we need to handle, and the speed with which the index is updated. We're also usually looking to introduce new capabilities in our infrastructure, for example, the ability to examine more information about documents when making ranking decisions, the ability to quickly try out new ideas for improving our ranking algorithms, etc. In designing a system, one tries to anticipate scaling in these various dimensions, but a given design really works well only when the design parameters are within one or two orders of magnitude of the original design goals. Beyond that, the level of scaling can change the design; solutions that weren't feasible originally suddenly become very attractive, and this often leads to significant redesigns of pieces of the system. As a consequence of this, in the six and a half years that I've been at Google, our query serving [software] infrastructure has undergone fairly radical changes at least five times. I expect this to continue to be true in the future.

In terms of general infrastructure, we place quite a bit of emphasis on developing tools and infrastructure to make it easier to develop new and interesting products. GFS and MapReduce are a couple of examples. We also have a number of internal tools that help with understanding performance bottlenecks in large-scale distributed systems.

Many of our newer products, like Gmail and Google Earth, have fairly different characteristics than Web searching, and it's important to have infrastructure and system building blocks that meet the needs of a diverse set of products that we want to offer, and to make it easy to develop new applications and services. One example is that we saw a need in many of our products to manage large amounts of semi-structured mutable data in interesting ways, and to help with that, we're developing BigTable, a large-scale distributed storage system for managing semi-structured data.
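
Dean only names BigTable here, so as a loose illustration of managing semi-structured, mutable data, the toy below keeps timestamped versions of values under (row, column) keys. It is not BigTable's data model or API; every name in it is an assumption.

```python
import time
from collections import defaultdict

class ToyTable:
    """Toy in-memory sketch of a sparse table keyed by (row, column),
    keeping multiple timestamped versions per cell. Illustrative only --
    not BigTable's actual data model or API."""

    def __init__(self):
        self._cells = defaultdict(list)   # (row, column) -> [(ts, value), ...]

    def put(self, row, column, value, ts=None):
        ts = ts if ts is not None else time.time()
        self._cells[(row, column)].append((ts, value))
        self._cells[(row, column)].sort(reverse=True)   # newest version first

    def get(self, row, column):
        versions = self._cells.get((row, column))
        return versions[0][1] if versions else None     # newest value, if any

t = ToyTable()
t.put("com.example/index.html", "contents", "<html>v1</html>")
t.put("com.example/index.html", "contents", "<html>v2</html>")
print(t.get("com.example/index.html", "contents"))
```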

In the future, a combination of both new products and our goals for pushing our current products in new and interesting directions will guide our decisions about the right software tools and infrastructure to build. These are very exciting times to be working on large-scale systems and products at Google.
