Avatars and Grid 2.0

By Tom Gibbs, Contributing Author

July 10, 2006

Talk about virtualization … Organizing a distributed array of a few thousand servers to run Monte Carlo simulations of the Black-Scholes algorithm across multiple time zones is one thing. Creating a complete virtual society over the Internet, where each of the players develops their own characters and subcultures, is another — and it's happening every day around the world.

Playing, or rather engaging, in virtual worlds through Massively Multiplayer Online Role-Playing Games (MMORPG) is a global phenomenon that's been brewing for more than 30 years, and breakthroughs in computer technology, access to broadband and the emergent, immersive approach to development that is coming along with Web 2.0 are driving it to heights that would have been hard to imagine even a few years ago. The most popular game of this genre — “World of Warcraft” — boasts more than 6 million subscribers, and one analyst firm estimates that MMORPG gamers spent $1 billion last year on “assets” in their virtual worlds — spending that is expected to grow by 50 percent in 2006. One billion of anything is something to take notice of.

What does this have to do with Grid computing and communications? Everything, if you believe that consumer usage — Grid 2.0 — will take the original work in computer virtualization, which was motivated by scientific simulation and engineering and evolved into more generic resource sharing for business usage, to drive the next level of technical maturity in distributed computing.

In a recent BusinessWeek article describing MMORPG, there was a glossary of terms for the uninitiated. “Grid” was the second item in the list, and was described as the “collection of servers that run a virtual world.” When the immersive gamers sense the online game is balking, they claim “the grid has gone down.” If you dig a little deeper into this phenomenon, it becomes clear that distributed Grid computing is a critical cornerstone of this genre as it evolves.

In Wikipedia, which is my favorite Web 2.0 site, there is a very healthy entry on MMORPG, which includes this blurb: “Peer-to-peer MMORPGs could theoretically work cheaply and efficiently in regulating server load, but practical issues such as asymmetrical network bandwidth and CPU-hungry rendering engines make them a difficult proposition.”

Who knows who added the piece about peer-to-peer (P2P) to Wikipedia — perhaps one of our GRIDtoday readers — but the fact is that P2P computing methodology is already at the heart of many online games today, and the advances in Grid computing and communications are aimed right at the issues of asymmetry and CPU-hungry applications. The folks in the financial services industry who do risk analysis are very keen on Monte Carlo simulations using the Black-Scholes (sic Shoals, see below) algorithm. They have similar problems with asymmetry and CPU-hungry applications, and while their application is clearly more important than MMORPG, the number of individuals who actually use the application is severely constrained.
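For readers who know the genre better than the trading floor, the financial workload mentioned above is simple to sketch: simulate many random terminal stock prices, average the discounted option payoff, and compare against the Black-Scholes closed form. The sketch below is purely illustrative — the parameters, path count and helper names are mine, not anything from a real risk desk:

```python
import math
import random

def black_scholes_call(S, K, r, sigma, T):
    """Closed-form Black-Scholes price for a European call option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    # Standard normal CDF via the error function
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def monte_carlo_call(S, K, r, sigma, T, n_paths=200_000, seed=42):
    """Monte Carlo estimate: draw terminal prices under geometric
    Brownian motion, then average the discounted payoff max(S_T - K, 0)."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n_paths):
        s_t = S * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(s_t - K, 0.0)
    return math.exp(-r * T) * total / n_paths

if __name__ == "__main__":
    exact = black_scholes_call(100, 105, 0.05, 0.2, 1.0)
    approx = monte_carlo_call(100, 105, 0.05, 0.2, 1.0)
    print(f"closed form: {exact:.4f}  monte carlo: {approx:.4f}")
```

The point for Grid computing is that the loop over paths is embarrassingly parallel — exactly the shape of workload that distributes well across a few thousand servers.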

Let's face it. The average consumer thinks that Monte Carlo is a place where princesses primp and James Bond visits when he gets a hankering for roulette and improperly made Martinis. These same folks associate Black “Shoals” with tar-filled beachfront on the “FloriBama” panhandle, where Jimmy Buffett hangs out slippin' on pop tops and sippin' on that frozen concoction that helps him and the rest of humanity hang on.

Consumers rule — they always have. But in the current economy, they rule absolutely. Say's Law of Markets claims that aggregate supply creates aggregate demand, and while this theorem has been argued on both sides for more than a century, it is inarguable that (even potential) aggregate demand drives venture investment. It is my belief that consumer usage will drive the level of investment needed to take virtualization and distributed Grid computing and communications to the next level. If the only usage was enterprise IT and scientific simulation, I'm convinced that steady investment in selected niche companies would drive steady evolution. However, a major push on multiple fronts with consumer usage could drive a breakthrough. It will be important as the Open Grid Forum evolves that it consider the consumer usage models that are emerging with Grid 2.0 to capitalize on the potential windfall.

For a look at MMORPG, we don't have to delve too far back into history — as the genre began in the mid-1970s with the game “Dungeons and Dragons” (D&D). Now, truth be told, I never got into D&D. I had friends who were totally hooked, but D&D was introduced to the world at about the same time the world was introducing me to beer and women (B&W). I found B&W — the reality game — much more intriguing than D&D — the virtual one — and could never spend the time or energy to get into it. Nonetheless, I followed the genre from afar and was amused along with my friends as the first computer implementations came on the scene in the early 1980s. It was the stuff on which only a geek could thrive, so of course I was interested. As PCs and game consoles got more powerful, and workstation-quality graphics became affordable (a system for under $1,000), the games improved dramatically and users went from obscure to omnipresent (or at least numerous enough to plan on spending $1 billion this year).

Most of the big-selling or highly used games, like “World of Warcraft” (mentioned above) and Electronic Arts' “The Sims 2,” are distributed along the lines of standard video games. A CD or DVD is used to load the game on a local system and the Web is used for interaction. The environments and rules are built into the game. In a newer form of the genre, a Web 2.0 approach is used to deliver the medium, as with “Second Life,” which is developed by Linden Labs. Here, the users help develop elements of the game and the virtual world effectively comes alive. The bulk of the software resides on servers and a local client runs the rest, along the lines of Google Earth.

“Second Life” is interesting as it applies to Grid and open source technology because major portions of the game are developed by the players, and both the players and the servers that deliver the game are virtual and distributed. In the game, the players develop their own avatars (or virtual personas) using tools provided by Linden Labs. There are rules, and Linden Labs even provides game money that can be converted into real money. Some players develop and sell real estate, and a few have become “moguls.” Bingo was a popular game among the virtual avatars, so one of the players developed a derivative called Tringo. The impact of this free-form development model and the conversion of virtual into real assets is causing a number of serious parties to get interested.

In May, the BBC rented a virtual island in “Second Life,” where it can stage online music festivals and throw exclusive celebrity parties. In a press release, it claimed that the virtual party will mirror BBC Radio 1's real-world One Big Weekend event, which was held May 12-14. Radio 1 plans to use the island to debut new bands over the next year. The good news for guys like Dallas Austin — an American songwriter and record producer who was arrested for cocaine possession on a recent trip to Dubai, United Arab Emirates — is that there are no customs authorities in “Second Life” yet, so he might avoid arrest when he attends the virtual celebrity gigs.

The Harvard Business Review had a piece on the use of the genre for marketing purposes to test new concepts with “real live” virtual participants. Economists are getting very interested — talk about a virtual sandbox for game theorists! Psychologists have rather obvious interests as well. Essentially, all the buzz around agent-based models could get very interesting. The trick to date has been the reality of the environmental rules of the game. In the virtual world, the grist of human action is about as close to real as it gets.

So this is more than just fun and games, and it all runs on a grid. The more powerful the grid in terms of effective bandwidth, compute power and storage, the more real the game. It's already pretty powerful stuff. I read one user comment that if she didn't have to pee, eat and have sex, she'd simply live in the virtual world. If the analysts are even close to correct — and nearly 10 million users will invest over $1 billion in MMORPGs and even a small fraction of that runs on a distributed virtual Grid — it's a usage model that must be taken seriously.

To add some meat on the bone, the folks at Linden Labs were kind enough to give me a look under the hood. “Second Life,” where more than 300,000 players develop their own identities and add elements to the game as they go, uses nearly 1,000 dual-processor/dual-core systems. So at 4,000 processors, this is a pretty intensive system. Even in the start-up phase it eclipses many enterprise and scientific systems. Heck, a quick peek at the Top500 list suggests that this system would clock in at about No. 100 if they chose to run the Linpack benchmark.

And they're not standing still. The “Second Life” world is growing at about 15 percent per month, and the server fleet at a little less than half that rate. The way the game works, server load is based on real estate, so the game grows as users build out and use more land, where each server provides the game space for roughly 64 acres. John Locke, the famous English philosopher who gave the United States' founding fathers the essence of the Declaration of Independence, would be so proud. The utilitarian value of property is being demonstrated in a very interesting way — with real money and pretend people doing the “work.”
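It's worth pausing on what those growth rates compound into. The sketch below projects the figures quoted above over a year — I've taken "a little less than half" of 15 percent to mean roughly 7 percent per month for the server fleet, which is my assumption, not Linden Labs' number:

```python
# Rough one-year projection of "Second Life" capacity from the rates
# quoted in the article. All figures are illustrative back-of-envelope
# numbers, not data from Linden Labs.
servers = 1000.0          # starting fleet (per the article)
acres_per_server = 64     # game space each server provides
world_growth = 1.15       # world grows ~15% per month
server_growth = 1.07      # assumed: "a little less than half" of 15%

world_acres = servers * acres_per_server
for month in range(12):
    world_acres *= world_growth   # demand for land compounds monthly
    servers *= server_growth      # the fleet compounds more slowly

capacity_acres = servers * acres_per_server
print(f"after 12 months: ~{world_acres:,.0f} acres of world, "
      f"~{capacity_acres:,.0f} acres of raw server capacity")
```

Under these assumptions the world outruns the fleet by better than two to one within a year — which is exactly why the density per server, or the efficiency of the allocation scheme, has to keep improving.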

The software for “Second Life” uses a moderate client load of about 25MB, which is updated every week or two. The system image is a few hundred megabytes and is updated daily, while the server load is about 100MB and is updated every other week, on roughly the same cycle as the client side. It's not exactly a pure Web 2.0 play, but who cares?! “SystemImager” and “cvsup” are the core tools, and they also use standard tools like “dsh” and “rsync,” as well as peripheral stuff like “nagios,” “ganglia,” “netdisco,” “cricket,” etc. Their software includes an internal system for doing the low-level resource allocation and fault-tolerance. New servers go into a pool of available CPUs, and as machines fail, the territory they were simulating is picked up by spares in this pool. When old servers are discarded, they simply shut them down and the load moves elsewhere.
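The spare-pool scheme described above is a classic pattern, and it can be sketched in a few lines. This is my own minimal illustration of the idea — new machines join a pool of spares, and when a machine dies its territory is reassigned from that pool — not Linden Labs' actual allocator, and all the names are invented:

```python
# A minimal sketch of pool-based failover: regions of the virtual
# world are mapped to servers, and failed servers hand their regions
# to spares. Purely illustrative; not Linden Labs' code.
class SimGrid:
    def __init__(self):
        self.spares = []       # idle servers waiting for work
        self.territory = {}    # region name -> server id

    def add_server(self, server_id):
        """New hardware goes straight into the spare pool."""
        self.spares.append(server_id)

    def assign(self, region):
        """Give a region to a spare server, if one is available."""
        if not self.spares:
            raise RuntimeError("no spare capacity for " + region)
        self.territory[region] = self.spares.pop()

    def fail(self, server_id):
        """On failure, every region the dead machine was simulating
        is picked up by a spare from the pool."""
        for region, sid in list(self.territory.items()):
            if sid == server_id:
                del self.territory[region]
                self.assign(region)

grid = SimGrid()
for sid in ("srv-a", "srv-b", "srv-c"):
    grid.add_server(sid)
grid.assign("Tringo Island")          # lands on srv-c
grid.fail(grid.territory["Tringo Island"])  # kill it; srv-b takes over
print(grid.territory)
```

Retiring an old server is just the failure path on purpose: shut it down, and its load moves elsewhere.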

Cutting-edge? Yes. Bleeding-edge? No. Sounds like a lot of scientific and enterprise systems I've run across recently, as a ton of capability can be delivered with much less risk than a few years ago.

MMORPG is one of many new consumer usages for Grid computing. It, along with others, will attract serious money and use serious computing infrastructure, even if the primary usage is entertainment. Why is this so important? It's my hypothesis that Grid computing will follow a path similar to the Internet: A technology initiated by DARPA that is then nurtured by the National Science Foundation followed by the international science community, which is then gradually adopted by business, and then explodes with consumer usage. The explosion is what I'm interested in, and the fuse looks to be sparking right now — even if it's an avatar holding the butane lighter in its virtual hand.

About Tom Gibbs

Tom Gibbs is director of worldwide strategy and planning in the sales and marketing group at Intel Corp. He is responsible for developing global industry marketing strategies, building cooperative market development, and marketing campaigns with Intel's partners worldwide. Gibbs joined Intel in 1991 in the Scalable Systems division as a sales segment manager. He then worked in Intel's Enterprise Server group, where he was responsible for business growth with all OEM customers with products that scaled greater than 4-way. Finally, just prior to joining the Solutions Market Development group, he was in the Workstation Products group — responsible for all board and system product development and sales. Prior to Intel, Gibbs held technical marketing management and industry sales management positions with FPS Computing, and did engineering design and development work on airborne radar systems at Hughes Aircraft Company. He is a graduate in electrical engineering from California Polytechnic State University in San Luis Obispo and was a member of the graduate fellowship program at Hughes Aircraft Company, where his areas of study included non-linear control systems, artificial intelligence and stochastic processes. He also previously served on the President's Information Technology Advisory Council for open source computing.
