A few themes run through this week’s picks for the top research items that emerged over the last seven days. Among them is making systems that run HPC applications more efficient, at both the VM and storage layers. We also present research on energy efficiency, job scheduling and resource sharing.
Without further delay, let’s dive in…
Resetting Resource Priorities
A team of Czech researchers has focused on an ongoing problem affecting users of shared systems – how to divide computational resources fairly.
The researchers note that, as it stands, the user with the smallest CPU needs gets pushed to the front of the line while those with heavier computational loads get shoved behind. They argue this is inherently unfair because the method “does not reflect other consumed resources like RAM, HDD storage capacity or GPU cores.” In fact, they observe wide variance among users: some with highly heterogeneous needs are still ranked in the queue on CPU time alone.
To address this, they have proposed a new approach to resource sharing that would “allow the development of usable multi-resource-aware user prioritization mechanisms.” They showed how different resources can be weighed and combined in one formula that can reset resource priorities.
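The paper does not publish its formula here, but the idea of weighing and combining several resources into one prioritization value can be sketched roughly like this; the resource names, weights and the normalization against machine capacity are all illustrative assumptions:

```python
# Hypothetical sketch of a multi-resource fairshare penalty: each
# resource is normalized against the machine's total capacity and the
# shares are combined with administrator-chosen weights. All numbers
# and names below are made up for illustration.

def multi_resource_penalty(usage, capacity, weights):
    """Combine per-resource consumption into one priority penalty.

    usage    -- dict of resource -> amount consumed by the user
    capacity -- dict of resource -> total amount available
    weights  -- dict of resource -> relative importance (sums to 1.0)
    """
    return sum(weights[r] * usage[r] / capacity[r] for r in usage)

# A GPU-heavy user is no longer "cheap" just because their CPU time is low.
cpu_only_user = {"cpu": 800, "ram": 4, "gpu": 0}
gpu_heavy_user = {"cpu": 100, "ram": 64, "gpu": 8}
capacity = {"cpu": 1000, "ram": 128, "gpu": 8}
weights = {"cpu": 0.4, "ram": 0.3, "gpu": 0.3}

print(multi_resource_penalty(cpu_only_user, capacity, weights))
print(multi_resource_penalty(gpu_heavy_user, capacity, weights))
```

Under CPU-only accounting the second user would look lighter than the first; once RAM and GPU consumption are weighed in, their penalty is higher, which is exactly the asymmetry the researchers are targeting.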
The team adds that with multiple resources a completely fair solution is not always realistic, but their approach does address the need to move beyond CPU-only prioritization decisions.
Zapping Energy Costs on IBM Blue Gene/P
A team from the Illinois Institute of Technology and Argonne National Laboratory has explored the greater issues of energy expenses in the overall HPC field, pointing to the variations in electricity prices at different points during the day.
On that note, they propose a “smart, power-aware job scheduling approach for HPC systems based on variable energy price and job power profiles.” At the heart of this is a “0-1 knapsack model” that they say can save on energy costs while also being a flexible and effective way to schedule jobs without degrading system utilization.
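To make the 0-1 knapsack framing concrete, here is a minimal sketch, not the authors’ implementation: during an expensive-electricity window the scheduler caps total power draw and picks the subset of queued jobs that maximizes a utilization value within that cap. The job data and power budget are invented for illustration:

```python
# Toy 0-1 knapsack job selection: "weight" is a job's power draw in
# watts, "value" is its contribution to utilization, and the knapsack
# capacity is the power budget for the current pricing period.

def knapsack_schedule(jobs, power_budget):
    """jobs: list of (name, power_watts, value); returns chosen job names."""
    # best[w] = (total value, chosen names) using at most w watts
    best = [(0, [])] * (power_budget + 1)
    for name, power, value in jobs:
        # iterate downward so each job is taken at most once (0-1 property)
        for w in range(power_budget, power - 1, -1):
            cand_value = best[w - power][0] + value
            if cand_value > best[w][0]:
                best[w] = (cand_value, best[w - power][1] + [name])
    return best[power_budget][1]

jobs = [("A", 300, 40), ("B", 200, 35), ("C", 400, 50), ("D", 150, 20)]
chosen = knapsack_schedule(jobs, 700)
print(chosen)  # the best subset fitting under a 700 W budget
```

Jobs left out of the knapsack are not dropped; in a scheme like the one described, they would simply be deferred to a cheaper pricing window.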
To put their theory into practice, they used the approach to design scheduling strategies for the Blue Gene/P using synthetic data and real job traces from production systems. They were able to demonstrate how their power-aware job scheduling approach can reduce energy expenses by up to 25 percent while having only a minimal impact on overall system utilization.
Process Placement in Multicore Clusters
A recent research report from a French team affiliated with Inria discusses how current clusters are built from NUMA nodes with multicore processors, an arrangement that creates programming challenges because of the many hardware elements that must be taken into account.
Specifically, they note how with the expected increase of application concurrency and input data size, one of the most important challenges to be addressed in coming years is that of locality, or how to improve data access and transfer within the application.
To address this, they point to an idea that they say can improve the performance of parallel applications by decreasing their communication costs via matching the communication pattern to the underlying hardware architecture.
They detail the algorithm and techniques behind the idea, which involves gathering both the communication pattern information and the hardware information. From these they compute a relevant reordering of the application’s process ranks, and they use those new ranks to reduce the communication costs of the application.
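The following is an illustrative sketch of the rank-reordering idea, not the authors’ actual algorithm: given how much each pair of processes communicates and how “far apart” each pair of cores is, pick the placement that minimizes total communication cost. Brute force over permutations is enough for a toy four-rank example (real tools use heuristics):

```python
# Toy rank reordering: comm[i][j] is traffic between ranks i and j,
# dist[a][b] is the topology distance between cores a and b. We search
# for the rank->core mapping that minimizes sum(comm * dist).
from itertools import permutations

def placement_cost(perm, comm, dist):
    # perm[i] = core assigned to rank i
    n = len(perm)
    return sum(comm[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def best_reordering(comm, dist):
    n = len(comm)
    return min(permutations(range(n)),
               key=lambda p: placement_cost(p, comm, dist))

# Ranks 0/1 and ranks 2/3 exchange the most data; cores sharing a socket
# are at distance 1, cross-socket traffic costs 4.
comm = [[0, 10, 1, 1],
        [10, 0, 1, 1],
        [1, 1, 0, 10],
        [1, 1, 10, 0]]
dist = [[0, 1, 4, 4],
        [1, 0, 4, 4],
        [4, 4, 0, 1],
        [4, 4, 1, 0]]
print(best_reordering(comm, dist))
```

Any optimal mapping here keeps each heavily communicating pair on the same socket, which is precisely the “match the communication pattern to the hardware” effect the paper describes.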
HPC Clouds – Cloud Storage with OpenStack Swift
A team from Los Alamos National Laboratory has revealed how they used the Swift Object Store from OpenStack as their disk-based cloud storage system. For the team, Swift has provided an “open source software for creating redundant, scalable object storage using clusters of standardized servers to store petabytes of accessible data.”
At the heart of this effort is the need to address growing HPC requirements on the archiving side. They note that simply buying more tape or hard drives to keep up with demand is not a viable solution, and they believe that “merging advanced features from both HPC systems and cloud systems is a promising direction.”
They reiterate that this is not a file system or real-time storage approach, but rather a “long term storage system for a more permanent type of static data that can be retrieved, leveraged and then updated if necessary.”
As the team behind the project states,
At LANL, we have worked on high-performance computing (HPC) systems for many years. The LANL parallel log file system (PLFS) has demonstrated its superior capability for the conversion of logical N-to-1 parallel I/O operations into physical N-to-N parallel I/O operations on HPC production systems. In this article, we describe the leveraging of the scaling capability of cloud object storage systems and the transformative parallel I/O feature (Fig. 1) of the LANL PLFS and the building of a parallel cloud storage system.
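The N-to-1 to N-to-N conversion the quote refers to can be sketched conceptually: N processes that logically share one file each write to their own physical file, while an index maps logical offsets back to physical locations. This toy model only illustrates the idea; the names and structures bear no relation to the real PLFS on-disk format:

```python
# Conceptual model of PLFS-style I/O transformation: writes from many
# ranks to one logical file become contention-free appends to per-rank
# physical files, with an index recording where each record landed.

class ToyPLFS:
    def __init__(self):
        self.data = {}    # rank -> bytearray (per-rank physical "file")
        self.index = []   # (logical_offset, length, rank, physical_offset)

    def write(self, rank, logical_offset, payload):
        buf = self.data.setdefault(rank, bytearray())
        self.index.append((logical_offset, len(payload), rank, len(buf)))
        buf.extend(payload)  # N-to-N: each rank appends to its own file

    def read(self, logical_offset, length):
        out = bytearray(length)
        # replay the index to reassemble the logical byte range
        for lo, ln, rank, po in self.index:
            for k in range(ln):
                pos = lo + k - logical_offset
                if 0 <= pos < length:
                    out[pos] = self.data[rank][po + k]
        return bytes(out)

plfs = ToyPLFS()
plfs.write(0, 0, b"hello ")   # rank 0 writes the first logical region
plfs.write(1, 6, b"world")    # rank 1 writes the second, concurrently
print(plfs.read(0, 11))
```

Readers see one coherent logical file even though no two writers ever touched the same physical file, which is what lets this pattern scale on both HPC file systems and, as the LANL work argues, cloud object stores.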