GRID PLUGTEST: INTEROPERABILITY ON THE GRID

By Nicole Hemsoth

January 31, 2005

ETSI and INRIA organized a three-day Grid Plugtest, which started on Oct. 18. The objective was to learn, through user experience and open discussions, about the features needed in future versions of the ProActive Grid middleware, and to gather feedback on the deployment and interoperability of Grid applications based on the ProActive library and distributed across various Grid platforms. Over the three days, the event drew 80 participants from 10 countries: France, Chile, the United States, England, the Netherlands, Switzerland, Spain, Italy, Japan and Korea. They met to share their views of ProActive, the Grid middleware developed by the OASIS team at INRIA. The event was organized under the supervision of UNSA (University of Nice), I3S and CNRS, and was sponsored by IBM, Sun and ObjectWeb.

The event comprised three parts. The first day was devoted to ProActive talks. The morning presented the general features offered by the middleware, with talks covering its main aspects: the programming model, group communications, mobility, Grid deployment capabilities, the Grid component model and security. In the afternoon session, users were invited to speak about their own use of the middleware. In the evening, future work was presented and a panel of experts discussed current problems in the Grid domain, under the title “Stateful vs. Stateless Web Services for the Grid: how to get both scalability and interoperability?”, with Denis Caromel (UNSA), Tony Kay (Sun Microsystems), Jean-Pierre Prost (IBM EMEA Grid Computing), Vladimir Getov (University of Westminster), Marco Danelutto (University of Pisa) and Christophe Ney (ObjectWeb).

ProActive is an LGPL Java library for parallel, distributed and concurrent computing that also provides mobility and security in a uniform framework. With a reduced set of simple primitives, ProActive offers a comprehensive API that simplifies the programming and deployment of applications on a local area network (LAN), on clusters of workstations, or on Internet Grids. Its deployment infrastructure, based on XML files, provides a level of abstraction that removes any reference to software or hardware configuration from the application source code, and includes an integrated mechanism to specify which external processes must be launched and how. The goal is to be able to deploy an application anywhere without changing the source code, with all the necessary information stored in an XML descriptor file. ProActive also features a well-defined Grid component programming model.
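
To make this externalized-deployment idea concrete, here is a minimal Java sketch of the pattern: the application code never names hosts or protocols and only reads a descriptor file. The file name (nodes.xml) and its <node host="..."/> format are invented for this illustration and are not ProActive's actual descriptor schema.

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    /**
     * Illustrative only: the descriptor format below is invented for this
     * sketch and is NOT ProActive's XML descriptor schema. It shows the
     * general pattern of keeping host and protocol details out of the code.
     */
    public class DescriptorDemo {

        /** Reads <node host="..."/> entries from a hypothetical descriptor file. */
        static List<String> readHosts(File descriptor) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(descriptor);
            NodeList nodes = doc.getElementsByTagName("node");
            List<String> hosts = new ArrayList<String>();
            for (int i = 0; i < nodes.getLength(); i++) {
                hosts.add(((Element) nodes.item(i)).getAttribute("host"));
            }
            return hosts;
        }

        public static void main(String[] args) throws Exception {
            // The application only knows the descriptor's name; changing the
            // target Grid means editing the XML file, not recompiling the code.
            for (String host : readHosts(new File("nodes.xml"))) {
                System.out.println("would deploy a worker on " + host);
            }
        }
    }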

The second day was dedicated to a contest between six teams: AlgoBar, Tournant and INRIA from France, the University of Chile, NTU from Taiwan, and the University of Southern California. The aim was to tackle the embarrassingly parallel N-queens problem for N as large as possible: count the number of ways to place N non-attacking queens on an N×N board within a limited amount of time. The world record stands at N=24, with 227,514,171,973,736 solutions, computed on 64 CPUs (Pentium 4 Xeon 2.8 GHz in a FireCore cluster) with 75,516 tasks using MPI (standard parallel programming) in 22 days. The INRIA team equaled this record in the offline challenge (the qualification round for the live event), taking 17 days on a P2P desktop Grid of more than 300 heterogeneous machines running the ProActive middleware.
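
The problem lends itself naturally to this kind of Grid: fixing the column of the queen in the first row splits the search into independent subtrees whose partial counts can be computed on separate machines and simply summed. The sketch below is a plain sequential Java backtracking counter with that split made explicit; it illustrates the decomposition and is not any team's contest code.

    /**
     * Counts N-queens solutions with bitmask backtracking. The first-row loop
     * in main() is the natural split point for an embarrassingly parallel run:
     * each first-column choice is an independent task whose partial count can
     * be computed on a different machine and summed. (Illustrative sketch only,
     * not any team's contest code.)
     */
    public class NQueensCount {

        // cols/diag1/diag2 are bitmasks of attacked columns and diagonals.
        static long solve(int n, int row, int cols, int diag1, int diag2) {
            if (row == n) return 1;
            long count = 0;
            int free = ~(cols | diag1 | diag2) & ((1 << n) - 1);
            while (free != 0) {
                int bit = free & -free;   // lowest available column
                free -= bit;
                count += solve(n, row + 1, cols | bit,
                               (diag1 | bit) << 1, (diag2 | bit) >> 1);
            }
            return count;
        }

        /** One independent task: all solutions with a queen at (row 0, firstCol). */
        static long countFromFirstColumn(int n, int firstCol) {
            int bit = 1 << firstCol;
            return solve(n, 1, bit, bit << 1, bit >> 1);
        }

        public static void main(String[] args) {
            int n = args.length > 0 ? Integer.parseInt(args[0]) : 12;
            long total = 0;
            for (int col = 0; col < n; col++) {   // each iteration = one task
                total += countFromFirstColumn(n, col);
            }
            System.out.println(n + " queens: " + total + " solutions");
        }
    }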

To be able to run such a contest, a Grid was built with the help of our partners across 20 sites in 12 countries. We gathered a total of 473 machines with 800 processors, totaling 100 gigaflops (measured with the SciMark 2.0 pure Java benchmark). One very important and interesting aspect was the heterogeneity of the resources used to build the Grid: operating systems (Linux, Windows XP, MacOS, SGI Irix and Solaris), access protocols (ssh, gsissh), Grid middleware (Globus), job schedulers (PBS, LSF, Sun Grid Engine, OAR, Prun), security policies (firewalls, NAT, private IP addresses, ...), detailed below, and Java Virtual Machines (Sun, BEA, SGI). The deployment of, and interoperability between, all resources and sites were achieved using ProActive.

Most of our concerns were about the different security policies we encountered at each site during the setup. The challenge was to access each site according to its security policy, for which we defined four levels of friendliness (a minimal illustrative sketch follows the list):

  • Friendly: the site allowed all incoming and outgoing connections from and to machines on the ETSI Plugtest network; no specific action was needed.
  • Semi-friendly: the site allowed a range of ports to be opened, in which case HTTP was used as the communication protocol to reach the machines.
  • Semi-restrictive: the site allowed only ssh communications, so we used the ssh/RMI-tunneling feature to deploy jobs.
  • Restrictive: the site exposed a public IP address for the front end and private IPs for the back-end nodes; users were constrained to use hierarchical deployment in their application.
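
As a purely hypothetical illustration of how a deployment layer might branch on these four levels, the following Java sketch maps each level to the corresponding access strategy described above; the enum and the string labels are invented for this example and do not correspond to ProActive's API or configuration.

    /**
     * Hypothetical sketch only: the enum and the strategy labels are invented
     * to illustrate the four access levels described in the list above; they
     * do not correspond to ProActive's actual API or configuration keys.
     */
    public class SitePolicyDemo {

        enum Friendliness { FRIENDLY, SEMI_FRIENDLY, SEMI_RESTRICTIVE, RESTRICTIVE }

        /** Picks an access strategy matching the site's security policy. */
        static String strategyFor(Friendliness level) {
            switch (level) {
                case FRIENDLY:         return "direct connections, no specific action needed";
                case SEMI_FRIENDLY:    return "HTTP over the allowed port range";
                case SEMI_RESTRICTIVE: return "ssh/RMI tunneling";
                case RESTRICTIVE:      return "hierarchical deployment through the public front end";
                default:               throw new IllegalArgumentException("unknown level");
            }
        }

        public static void main(String[] args) {
            for (Friendliness f : Friendliness.values()) {
                System.out.println(f + " -> " + strategyFor(f));
            }
        }
    }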

We also added new features to ProActive to cope with some internal site configurations (missing DNS, machines with two network interfaces, and so on).

All contestants were asked to use ProActive as their middleware and, during their one-hour time slot, could freely use the power of the 800-plus processors dispatched around the world (Australia, Europe, North and South America, India). This was strictly an engineering event, neither a conference nor a workshop. As such, active participation was requested from contestants, who had to provide their own implementation of the N-queens algorithm and, if needed, modify the existing XML deployment files to suit their strategy.

There was no compulsory programming language, but all teams wrote their code in Java, except for the NTU team, which hid some native routines inside a Java wrapper. This scheme led to a faster algorithm but sacrificed Java's portability: the sites had to be updated with the native code, which would be hard to do on a larger scale. The all-Java approach, on the other hand, allowed transparent migration of code to remote nodes, with no manual exportation of code.

The criteria for deciding the winners were:

  • the greatest number of solutions found,
  • the largest number of processors used,
  • the fastest algorithm.

The Chilean team got ahead of the other five participants. Within their hour, they found the number of solutions for 18 queens, for 19 queens twice, for 20 queens four times and for 21 queens once. They ranked first on the number of solutions found in one hour (800 billion), the number of nodes used (560) and the speed of their algorithm (21 queens in 24 minutes, 38 seconds).

The Plugtest, co-organized by INRIA and ETSI, pleased all the participants. It was useful both for the users, who received help from the ProActive team, and for the OASIS team, which received feedback from the users. Preparing for the Plugtest forced us to add functionalities to the middleware and to deliver a stable system: we had to develop certain aspects that had been left aside because of time restrictions and priorities, but that were in fact of primary importance. We are also very satisfied with the results obtained during the N-queens contest, which showed that applications can take advantage of the Grid in a simple way. Another happy discovery was the number of different scientific domains that could use our middleware in their applications. This is a direct effect of the generic programming model at its core, which can be reused for biology, physics and the simulation of evolving phenomena.

We did have trouble getting the Grid to work, as mentioned earlier, but once this configuration was achieved, the work for the users was simple. Indeed, deployment on the different sites was not a source of problems, which is a good indicator of how fit ProActive is for such usage: users were not bothered by system configuration and could instead focus on the internals of their application.

Pressed by general demand, INRIA and ETSI will organize another Plugtest on Oct. 10-14, 2005. The event is planned to be larger on all scales: we expect more people (more than 150), a longer time span (five days), a larger Grid, the use of other middleware, and an even wider range of domains. This future event will involve several European projects; two workshops will be held during the five days, a GridCoord workshop on open middleware for the Grid and a CoreGrid workshop on programming models and components for the Grid. The application used for the contest and interoperability Plugtest is not yet fixed, but we have been considering the traveling salesman problem, which requires many more communications and will be even more interesting and demanding to supervise.
