Univa, which became the sole purveyor of commercially supported Grid Engine a little over a year ago when the middleware vendor acquired the source code, copyrights, and trademarks associated with the software from Oracle, recently held a webinar on how coprocessors like NVIDIA GPUs and Intel Xeon Phis are integrated and used in a Univa Grid Engine compute cluster. In light of the September release of Grid Engine 8.2 and the still-growing popularity of accelerators and coprocessors for a range of workloads, the timing is right for a look at how best to make use of these parts inside the Grid Engine environment.
The session was presented by Senior Software Engineer Daniel Gruber, who has contributed to the technology at Sun and Oracle and is now part of the Univa Grid Engine development team. The specialties of this DRMAA2 working group member include core/memory binding, RSMAP, Intel Xeon Phi support, cgroups, Cray XC30 support, and HP CMU integration.
“Coprocessors are an integral part of a compute cluster and therefore deserve special attention when it comes to managing those additional resources,” explains Gruber. “The most widely used coprocessors are GPUs like the NVIDIA Kepler or the Intel Xeon Phi card. Both have completely different architectures, with all their advantages and disadvantages. They also have basic things in common.”
“The current generation is attached via PCI Express. They can be used in exclusive and non-exclusive mode. They consume power and produce heat, hence your cluster scheduler needs to manage optimal usage and resource access, and it must be able to deal with power and temperature constraints.”
The coprocessor approach to accelerating compute-intensive applications is still gaining traction for workloads that rely on floating point arithmetic, graphics, signal processing, string processing, and encryption. Because of their ability to accelerate specific workload types, coprocessors appear in many of the fastest HPC clusters in the current TOP500 list. “They can also be very energy-efficient,” says Gruber. “If you look at the Green500 list from June 2014, the first 17 are using GPUs or Intel Xeon Phi, and this hasn’t changed much in the latest Green500 list,” he adds.
With a slide deck that includes code samples, Gruber proceeds to explain how to configure coprocessors and resources in a Univa Grid Engine environment. The agenda covers how resources of different types are configured, how to submit and monitor jobs, working with usage metrics, and optimizing the configuration for NUMA machines and mixed workloads.
Starting with the configuration of coprocessors and resources in the Univa production cluster, Gruber says the first step is to make Grid Engine aware of new resources by declaring them in the resource configuration, also known as the complex configuration. To edit this complex configuration interactively, the user can run qconf -mc (this requires being a Grid Engine admin user, or root on a Grid Engine admin host). This opens the vi editor with the current resource configuration.
In the screenshot you can see the first couple of lines and the different types:
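The slide itself is not reproduced here, but as a rough sketch, a GPU entry appended to the complex configuration might look like the following (the gpu name and its per-host consumable setting are assumptions for illustration; built-in complexes such as arch occupy the first lines of the file):

```
#name   shortcut   type       relop   requestable   consumable   default   urgency
arch    a          RESTRING   ==      YES           NO           NONE      0
gpu     gpu        RSMAP      <=      YES           HOST         0         0
```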
Gruber explains why the RSMAP (resource map) type helps with coprocessors. “It avoids collisions when you have multiple GPUs on one host. It provides a mapping of jobs to resource instances like coprocessor numbers. With an INT resource you can only configure a simple limit, but with RSMAP you can configure a limit and an identifier for each single instance. Hence, it is now possible for the scheduler to attach those identifiers to jobs and mark them as used, instead of handling just the numerical limit. This mapping is visible in qstat -j for the user, as well as in an environment variable with the SGE_HGR prefix (for example, $SGE_HGR_gpu holds the granted id).”
Also, per-host resource configuration makes it possible to decouple the GPU request from the actual slot request of the job.
Gruber explains that resource requests are multiplied for parallel jobs depending on the “consumable” setting in the resource configuration.
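As a hedged illustration of that multiplication (the parallel environment name smp and the gpu complex are assumptions):

```
# A parallel job asking for 4 slots plus one GPU:
qsub -pe smp 4 -l gpu=1 gpu_job.sh

# Depending on the "consumable" column of the gpu complex, the scheduler
# debits the resource differently:
#   consumable=YES  -> the request is multiplied by the slot count: 4 GPUs
#   consumable=JOB  -> the request counts once for the whole job: 1 GPU
#   consumable=HOST -> the request counts once per host the job spans
```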
After making Grid Engine aware of the new coprocessor resource, the user needs to tell Grid Engine how many resources of the new type are attached to each host, says Gruber. This is done in the execution host configuration.
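A minimal sketch of that step, assuming a host named node01 with two GPUs whose RSMAP ids are simply 0 and 1:

```
# Open the execution host configuration in an editor:
qconf -me node01

# ...and initialize the gpu resource with two ids in complex_values:
complex_values   gpu=2(0 1)
```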
“Now that we have made Univa Grid Engine aware of our coprocessors, the next step is using them,” he adds.
To submit jobs, the user requests the resource with the -l switch, followed by a resource name and an amount or value. Resource requests made with the -l switch can also be prefixed with the -soft keyword, which means the scheduler tries to fulfill the resource request, but if this is not possible in the current cluster situation the job can still go through without the specified resource. This only makes sense for string resources, for example when your code works better on hosts with certain models of your coprocessor. Consumables and RSMAP resources cannot be used as soft requests; they need to be requested as hard resources. You can switch the request type back with the -hard parameter.
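Hedged submission examples, assuming the gpu RSMAP complex from above plus a hypothetical string complex named gpu_model:

```
# Hard request (the default): the job only starts where a gpu instance is free.
qsub -l gpu=1 gpu_job.sh

# Soft request on a string resource: preferred but not mandatory; -hard
# returns subsequent -l requests to hard semantics.
qsub -soft -l gpu_model=K40 -hard -l gpu=1 gpu_job.sh
```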
Parallel jobs need to request a parallel environment, and array jobs can also be used with coprocessors. Array jobs are simply multiple instances of the same job that differ only in their task id, says Gruber. Often jobs use the task id to access a different data set for each task.
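A sketch of an array job with coprocessors, again assuming the gpu complex; inside the job script, $SGE_TASK_ID selects the data set and the granted RSMAP id arrives in $SGE_HGR_gpu:

```
# Ten tasks, each requesting one GPU instance:
qsub -t 1-10 -l gpu=1 array_job.sh

# Inside array_job.sh (sketch):
#   DATASET=input_${SGE_TASK_ID}.dat   # the task id picks the data set
#   DEVICE=$SGE_HGR_gpu                # the granted id picks the device
```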
Now that the jobs have been directed to the right coprocessor on the host, the next step involves monitoring the job and resource usage. With one or more coprocessors on a host, there is interest in getting and processing information on the state of those coprocessors, explains Gruber. Univa Grid Engine can be enhanced to measure and report arbitrary load values back from the compute hosts. This is done by installing load sensors: external scripts or binaries that are started and stopped automatically by the execution host daemon. They follow a very simple protocol spoken over the standard input and standard output channels of the application, so a load sensor can be written in any scripting or programming language, enabling arbitrary code to be executed in order to gather load values, which are reported back to the execution daemon. The one requirement is that the measured resources are already known to Univa Grid Engine, meaning they must already be configured in the resource (complex) configuration.
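A minimal load sensor sketch in shell, assuming a complex named gpu_temp has already been added via qconf -mc and that the host carries an NVIDIA card; the loop implements the stdin/stdout protocol, and the finished script would be registered through the load_sensor parameter of the host configuration (qconf -mconf):

```sh
#!/bin/sh
HOST=`hostname`
while read input; do
    # The execution daemon writes "quit" when the sensor should stop.
    if [ "$input" = "quit" ]; then
        exit 0
    fi
    # Query the first GPU's temperature (vendor tool; assumption: NVIDIA).
    TEMP=`nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader -i 0`
    # Report one load value in the host:resource:value format.
    echo "begin"
    echo "$HOST:gpu_temp:$TEMP"
    echo "end"
done
```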
In order to add resources to the system, Univa provides scripts that simplify the task. The user has to have the right permissions when calling the script, and also needs to be an admin user to perform the necessary commands. Note, too, that the script's dialog switch requires the dialog utility to be installed on the host first. The process is slightly different for the Xeon Phi versus GPUs, as seen in the slides below:
The final part of the session covers some platform-specific settings for the Intel Xeon Phi and how to optimize for NUMA machines and mixed workloads. “With the RSMAP resource type, you not only have the possibility to align jobs to resource instances, you can also attach cores and sockets to that resource,” Gruber points out. The alignment of resource instances to the compute node topology is done with topology masks, strings that describe which cores are usable by a job that has been granted a resource instance.
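A hedged topology mask sketch, assuming a two-socket host with four cores per socket and one GPU near each socket; in the mask, S marks a socket, uppercase C a core the job may use, and lowercase c a core that is off-limits:

```
# gpu id 0 is bound to the cores of socket 0, id 1 to those of socket 1:
complex_values   gpu=2(0:SCCCCScccc 1:SccccSCCCC)
```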
The senior software engineer also explains how it is possible to optimize for mixed workloads, putting coprocessor and non-coprocessor jobs on the same machine by isolating the workloads with cgroups. “Mixing workloads becomes important with the continuously growing number of cores per host,” observes Gruber. “Even when you configure exclusive host access in Grid Engine for your coprocessor jobs, they most likely don’t scale on the CPU side to fully load all CPU cores. Hence, for better utilization of your computing resources, it is best to allow a mixture of jobs on your compute machines.”
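A hedged sketch of the cgroups side, using Univa's cgroups_params setting in the host or global configuration (qconf -mconf); the exact sub-parameters shown here are illustrative only:

```
# Confine each job to its granted CPU cores via the cpuset controller:
cgroups_params   cgroup_path=/sys/fs/cgroup cpuset=true
```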
The full video presentation with all slides and code snippets can be downloaded here.