From Brazil, with Love: A Look at OurGrid

By Derrick Harris, Editor

September 19, 2005

GRIDtoday editor Derrick Harris recently spoke with Walfredo Cirne, director of the Distributed Systems Lab at the Universidade Federal de Campina Grande in Brazil, about OurGrid — a project he leads (and to which HP has contributed a fair amount of resources) that has become one of the largest computational Grids in Brazil. Cirne will be presenting at the next Gelato Federation meeting, which will take place Oct. 2-5 in Porto Alegre, Brazil.

GRIDtoday: Tell me about OurGrid. Can you give me a brief background on the project and what it hopes to accomplish?

WALFREDO CIRNE: OurGrid is an open, free-to-join Grid. Unlike traditional Grids, joining OurGrid is automatic. No paperwork or approvals of any sort are required. Someone wanting to join OurGrid just downloads the software from www.ourgrid.org and installs it. OurGrid forms a peer-to-peer Grid in which peers donate their idle computational resources in exchange for accessing other peers' idle resources when needed. The vision is that OurGrid provides a massive worldwide compute platform on which research labs can trade their spare compute power for the benefit of all.

I often think about OurGrid as a digital inclusion project. It caters to the small research labs spread throughout the world. These labs increasingly demand more computational power, just as large research labs and high-visibility projects do. However, they seldom have the resources to afford traditional HPC solutions or the specialized computing personnel to use them.

Gt: OurGrid is focused on running “Bag-of-Tasks” applications. How do these applications vary from other types of applications that might run on a Grid?

CIRNE: We use the term Bag-of-Tasks to refer to parallel applications whose tasks are independent. OurGrid began as a solution to a particular problem: the execution of hundreds of thousands of simulations. The simulations were independent, so extracting parallelism from them was very straightforward. Such independence also simplifies running the application on the Grid because it greatly reduces coordination problems. This initial solution was very minimalist and supported only a single user. But it worked so well, it was so much simpler than traditional Grids, and Bag-of-Tasks applications are useful in so many contexts, that we decided to keep the focus on this kind of application as the system evolved into a worldwide compute platform.
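Because the tasks never communicate, the whole pattern reduces to "hand each task to any free worker." A minimal Python sketch of the idea (an illustration of the Bag-of-Tasks pattern, not OurGrid's actual code; `simulate` is a stand-in for a real simulation):

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(seed):
    """Stand-in for one independent simulation run."""
    # A trivial deterministic computation playing the role of a task.
    return sum(i * seed for i in range(1000))

def run_bag_of_tasks(seeds, max_workers=4):
    # Each task is independent, so tasks can be dispatched to any
    # available worker in any order -- no inter-task communication,
    # and therefore almost no coordination logic.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(simulate, seeds))

results = run_bag_of_tasks(range(8))
```

On a Grid the workers would be remote machines rather than local threads, but the scheduling logic stays this simple, which is what makes the application class so forgiving of heterogeneous, unreliable resources.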

In fact, we have just finished the support for worldwide resource sharing among unknown entities to run Bag-of-Tasks applications. We are now evolving OurGrid to support communicating parallel applications. This will be available in OurGrid 4.0, which should be released in November. The challenge is to add this support for communicating applications while keeping OurGrid simple and safe.

Gt: Can you speak a little about the importance of being “fast, simple, scalable and secure”?

CIRNE: These are the main requirements for OurGrid to succeed. Clearly, OurGrid must be fast (i.e., using the Grid must produce results much faster than not using it). It must also be simple because our target public is the small research lab. These people just don't have the time or resources for arcane computing solutions; the closer to "plug-and-play," the better. OurGrid must be scalable because the small research labs that need substantial compute power number in the thousands. Note also that scalability is not just a technical issue; it also has administrative ramifications. Finally, OurGrid must be secure, because otherwise no one is going to use it.

Gt: What are some of the unique projects running on OurGrid?

CIRNE: There are a number of projects using OurGrid, from molecular dynamics simulations and climate forecasting to drug discovery, hydrological management and image processing. For instance, we make intensive use of OurGrid to run simulations that help us refine and extend OurGrid itself.

A particularly interesting project based on OurGrid is SegHidro, a project that aims to improve the decision-making process for the water reservoirs of the Brazilian Northeast. The Brazilian Northeast (which is the region where I live) has very irregular rainfall. We have severe droughts, but also floods. As such, good management of our water reservoirs is critical to minimize the effects of droughts, as well as to prevent floods. SegHidro combines a number of climate and water forecast models with reservoir models to provide decision makers with a risk analysis of their decisions. In fact, different research labs in the region contributed different pieces of the whole model. It is very nice to see our efforts helping to improve people's lives, especially those of the poor people who suffer the most from our droughts and floods.

Gt: I understand that OurGrid was developed in collaboration with Hewlett-Packard. How was it working with them and what role did they play in the development?

CIRNE: HP has been key to OurGrid's existence. They have helped the project in many ways. HP has funded most of the research behind OurGrid, and also a considerable part of its development. They have closely followed the project's evolution, providing important feedback and helping us establish priorities. Furthermore, they've been directly involved in research that made OurGrid possible. The Network of Favors (the peer-to-peer resource sharing protocol that is OurGrid's heart) was jointly developed by us at UFCG and HP Labs in Bristol (United Kingdom). SWAN (OurGrid's security mechanism) was conceived and implemented by HP Brazil R&D Labs, in Porto Alegre.
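The core of the Network of Favors is that each peer keeps a purely local ledger of favors exchanged with every other peer and, when its idle resources are contended, gives them to the requester with the best record. A toy sketch of that accounting (my illustration of the published idea, not the OurGrid implementation):

```python
class Peer:
    """Toy model of Network-of-Favors accounting at a single peer.

    Each peer keeps only a local ledger: how much each other peer has
    donated to it, minus how much it has donated back. No global state
    or negotiation between peers is required.
    """

    def __init__(self):
        self.balance = {}  # peer_id -> net favors owed to that peer

    def record_donation_received(self, donor, amount):
        self.balance[donor] = self.balance.get(donor, 0.0) + amount

    def record_donation_given(self, consumer, amount):
        # Balances are clamped at zero: a free rider simply sits at
        # zero and gains nothing by rejoining under a new identity.
        new_balance = self.balance.get(consumer, 0.0) - amount
        self.balance[consumer] = max(new_balance, 0.0)

    def choose_consumer(self, requesters):
        # Idle resources go to the requester with the highest balance,
        # so peers that donate more get served first when they ask.
        return max(requesters, key=lambda p: self.balance.get(p, 0.0))
```

So a peer that has received five units of work from "alice" and nothing from "bob" will serve alice first; the incentive to contribute emerges from each peer acting on local information alone.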

Gt: Right now, OurGrid consists of more than 500 machines. How many machines would you like to see make up the OurGrid network?

CIRNE: We have a rough estimate that there are about 10,000 research labs around the world that could benefit from OurGrid. Looking at the labs that have already joined the community, the average is 15 computers per lab. Therefore, our target is to reach 150,000 machines.

Gt: Although Grids are hardware and operating system agnostic, your institution is also a member of the Gelato Federation, which promotes Linux on Itanium. Why is it important for you?

CIRNE: Being a Gelato member reinforces our networking abilities, opening up opportunities for collaboration. Moreover, Gelato is a great source of technical information.

Gt: What makes Linux on Itanium such an effective platform for High Performance Computing?

CIRNE: Historically, better performance has been obtained from increasing processor clock speed. However, increasing clock speed is starting to hit the limits of physics and manufacturability. In fact, clock speed growth has already slowed down. Better performance will no longer be driven by having more CPU cycles per unit time, but by performing more instructions per clock cycle. And that is where Itanium comes in. Being clever about how to achieve more per clock cycle is what Itanium is all about.

As for why Linux on Itanium, I think it is just the natural choice. The whole HPC industry has long been based on UNIX-like systems. In recent years, Beowulf clusters have made it strongly Linux-based. It is now time we start seeing more and more of these clusters built around Itanium. By combining open-source Linux with multi-vendor Itanium system offerings, one can expect a very cost-effective solution.

Gt: How do you see Gelato members in Latin America, in particular, contributing to science and technology development in the region?

CIRNE: The whole Itanium technology is pretty cutting-edge (some may even say ahead of its time). Thus, it is very important for us, research labs in Latin America, to get involved in the process of refining it.

Gt: What is your personal history with Grid computing and HPC in general?

CIRNE: I became involved with Grids and HPC in 1997, during my Ph.D. My advisor, Fran Berman from the University of California, San Diego (UCSD), was one of the people who “created the area” in the mid-1990s. Naturally, I worked with Grids during my thesis.

In fact, OurGrid is somewhat of a byproduct of my Ph.D. thesis. As I mentioned, OurGrid stemmed from the need to run tons of simulations. This is because, during my thesis, I got unlucky with some workload distributions, which were very "elastic." As a result, I ended up having to run my simulator hundreds of thousands of times to get statistically valid results. The simulations were independent, making them a suitable application for the Grid. So I tried to run my simulations on the Grid. However, even in a top Grid lab, it was very hard to get a Grid running in production in the late 1990s. All we had were testbeds. So, I leveraged the simplicity of task independence and created MyGrid, a set of scripts that allowed me to combine all the machines I could log into via ssh into a "personal Grid." With the help of some friends (and a promise that I would always run "niced"), I got access to six labs, 178 processors in total. This was enough for me to run my simulations 116 times faster than on my desktop computer alone.
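The "personal Grid" trick Cirne describes — spreading independent tasks over whatever machines you can reach via ssh, always wrapped in `nice` so the owners never notice — can be sketched as follows. This is my reconstruction of the pattern, not the MyGrid scripts themselves; the host names are hypothetical and the commands are only built, not executed:

```python
import itertools

def make_ssh_commands(hosts, tasks, niceness=19):
    """Round-robin independent tasks over ssh-reachable hosts.

    Every remote command is wrapped in `nice` so the donated machines
    are never slowed down for their owners -- the promise Cirne made
    to his friends.
    """
    commands = []
    # itertools.cycle lets a short host list absorb a long task list.
    for host, task in zip(itertools.cycle(hosts), tasks):
        commands.append(["ssh", host, f"nice -n {niceness} {task}"])
    return commands

cmds = make_ssh_commands(
    hosts=["lab1.example.org", "lab2.example.org"],  # hypothetical
    tasks=["./simulate --seed 1", "./simulate --seed 2",
           "./simulate --seed 3"],
)
# Each command list could then be launched with subprocess.Popen and
# its stdout collected as the task's result.
```

A real harness would also retry tasks whose hosts disappear, which task independence makes trivial: just resubmit the task elsewhere.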

I later realized that MyGrid would be useful to many people. So, with the help of HP and the energy of many students, we've built a product version of MyGrid, with a manual and everything. It allows people to combine all the machines they have access to into a personal Grid able to run Bag-of-Tasks applications.

OurGrid followed, enabling people to automatically and safely exchange resources from their MyGrids. The idea is that they'll be able to run even on computers they have no accounts on, making their "personal Grid" scale far beyond the remote logins they can talk their friends into providing.

Gt: What are some of the big changes you have seen with Grid computing since you got involved, and what do you think will be the biggest changes we will see in the near future?

CIRNE: Grids started as one form of HPC. The idea was to combine resources from multiple administrative domains to obtain unprecedented levels of parallelism. For me, the biggest change was the evolution of this vision into the ability to access and combine computational services on demand. This takes Grids from a niche (HPC) and puts them into the computing mainstream. It is a vision that makes a lot of sense (both technically and economically). However, I'm afraid we are building a far too complex infrastructure. Last time I checked, there were 52 “emerging” Grid standards. It is just too complex. I expect to see much simplification before we can realize the vision of service-oriented Grids.

Gt: Is there anything else you'd like to add?

CIRNE: If you have a Bag-of-Tasks application, go to www.ourgrid.org, and give OurGrid a try. The system is for real, and we'd love to have you in the community.
