Tag: Scientific Computing
Randall J. LeVeque, Professor of Applied Mathematics at the University of Washington in Seattle, will be conducting a free course that brings the principles of parallelism on high performance computers to those in scientific computing.
Running scientific workloads in AWS provides researchers with a diverse toolkit for easily slinging data across availability zones, regions, or even the globe once that data is inside the infrastructure sandbox. Getting data in and out of AWS, however, has historically been more of a challenge. Cycle Computing’s Andrew Kaczorek and Dan Harris offer some helpful tips on optimizing ingress and egress transfers.
Software engineering still gets too little attention from the technical computing community, much to the detriment of the scientists and engineers writing the applications. Greg Wilson has been on a mission to remedy that, mainly through his efforts at Software Carpentry, where he is the project lead. HPCwire asked Wilson about the progress he’s seen over the last several years and what remains to be done.
Last week at its eScience Workshop at the University of California, Berkeley, Microsoft Research announced two key technological advances related to its Azure cloud. The advances are already serving researchers in ecology and biology, and they further demonstrate the potential of the cloud offering for scientific computing projects.
The announcement this morning that Amazon is offering Cluster Compute Instances for EC2, designed specifically for the needs of HPC users, might just be the long-awaited game-changer for the viability of scientific computing in the public cloud. The offering is fresh out of private beta and early results are promising, but only time will tell to what degree users will snatch up this opportunity to have supercomputing power on demand.
Researchers from Berkeley Lab are looking at options for scientific computing users to move beyond physical infrastructure, including the possibility of deploying workloads on public clouds. A recently published study of Amazon EC2's handling of data from the Nearby Supernova Factory sheds light, in both practice and theory, on putting large-scale scientific computing into the cloud.
Since the primary consideration in HPC is performance, it stands to reason that it’s no easy task to convince the scientific computing community that the public cloud is a viable option. Accordingly, a handful of traditional HPC vendors are refining their solutions to bridge the cloud performance chasm that exists in EC2, making the cloud more hospitable for HPC.
As high performance computing vendors polish their server and workstation portfolios with the latest multicore CPU and GPGPU wonders, Pico Computing is quietly making inroads into the HPC application space with its FPGA-based platforms. By picking the spots where reconfigurable computing makes the most sense, the company is looking to leverage its scalable FPGA technology to greatest effect.
In 2009, a number of early adopters in HPC got behind flash storage.
Striking a balance between science and software engineering.