This week at the HPC 360 conference in Champaign-Urbana, Illinois, sponsored by HPC on-demand provider R Systems, I spent some time speaking with Al Stutz, CTO of Avatec, a Springfield, Ohio-based non-profit modeling and simulation research organization with a current emphasis on the aerospace sector. Avatec is involved in an ongoing project examining ways the military can reduce the cost and development time of jet turbine engines, work that meshes with Avatec's broader aim of exploring solutions that improve HPC performance for companies that rely on simulation and modeling.
Avatec is taking part in the DICE program's sandbox project at SC10 in New Orleans this year, along with several national labs and companies that want to demonstrate the potential of joining geographically distributed InfiniBand clusters into a common InfiniBand mesh so the clusters can interoperate, passing messages and data back and forth. One of the key initial findings the cooperative wants to prove is that using Obsidian Longbow products with full encryption carries no performance penalty.
Below is part of a brief chat I had with Al Stutz about the sandbox and what it could mean for users looking for large resources to tackle major problems.
What this could mean is that clusters can be brought together to tackle extraordinarily large problems without the cost of a single giant system with an enormous processor count. Stutz claims that "by interconnecting systems across the country with this product from Obsidian, you can extend your Infiniband cluster dramatically by linking together these geographically distributed clusters, scheduling them together and sharing vast resources when you have particularly large problems to address."
The SCinet "sandbox" demo will be staged at SC by members from NASA Goddard, Lawrence Livermore National Lab and others as they test a WAN file system and data transfers using Obsidian ES Encryptors over 10 GbE links from scattered sites directly to Booth #1149.