A team of researchers from MIT’s Geospatial Data Center recently explored new ways to harness the power (and mitigate the challenges) of multicore systems to open new avenues in the geosciences.
The team behind the award-winning paper, led by Christopher Leonardi, acknowledges that multicore is finding its way into more levels of computing. Accordingly, they see the need for scalable programming strategies suited to this new era in computing.
As the authors state, “With the expense and high demand for compute time on large cluster systems, multi-core represents an attractive and accessible HPC alternative but the well-known challenges of software development on such architectures (thread safety, memory bandwidth issues) must first be addressed.”
The paper, titled “A Multicore Numerical Framework for Characterizing Flow in Oil Reservoirs,” sets forth a numerical framework that allows scalable, parallel execution of engineering simulations on multicore, shared-memory architectures. As the authors describe in their abstract, “distribution of the simulations is done by selective hash-tabling the model domain which spatially decomposes it into a number of orthogonal computational tasks.”
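The paper itself does not spell out the decomposition in pseudocode, but the general idea of spatially hashing a domain into independent tasks can be sketched as follows. This is a minimal illustration, not the authors' implementation: the cell size, node representation, and per-task work function are all assumptions made for the example.

```python
# Illustrative sketch (not the authors' code): decompose a set of model
# nodes into independent tasks by hashing their spatial cell coordinates.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

CELL = 4.0  # assumed cell edge length; nodes in one cell form one task

def cell_key(x, y, z):
    # Map a point to its containing cell -- the spatial "hash" of the domain.
    return (int(x // CELL), int(y // CELL), int(z // CELL))

def decompose(nodes):
    # Bucket nodes by cell key; each bucket becomes one orthogonal
    # computational task that can be processed without touching the others.
    buckets = defaultdict(list)
    for p in nodes:
        buckets[cell_key(*p)].append(p)
    return buckets

def process(task):
    # Placeholder for per-task work (e.g. a local flow update on the cell).
    key, pts = task
    return key, len(pts)

if __name__ == "__main__":
    # A small synthetic 8x8 grid of nodes in the z = 0 plane.
    nodes = [(x * 1.5, y * 1.5, 0.0) for x in range(8) for y in range(8)]
    tasks = decompose(nodes)
    with ThreadPoolExecutor() as pool:
        results = dict(pool.map(process, tasks.items()))
    print(sum(results.values()))  # all 64 nodes accounted for across tasks
```

Because each bucket touches a disjoint set of nodes, the tasks can be scheduled across cores without locking shared state, which is the property that sidesteps the thread-safety concerns the authors raise.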
While you can read more about the technical details in the group’s PDF report, the condensed takeaway is that it is possible to achieve near-linear scalability with both of the proposed numerical methods, with utilization efficiency of around 90%.
The group put their work to the test by applying the methods to simulate fluid flow in a porous rock specimen, which is “a problem of broad geophysical significance, and in particular in enhanced oil recovery.”
As MIT’s Sidney Beese described, “this research forms an excellent basis for the investigation and utilization of emergent computational hardware such as NUMA (non-uniform memory access) and virtualized shared memory systems in which the subtleties of data caching and communication become increasingly important.”