3D heart model provides medical students with authentic learning experience.
Advanced imaging facilities at the Australian Synchrotron and Monash University will leverage the power of GPUs.
The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the computing power on display at SC10’s Student Cluster Competition; the University of Portsmouth’s new supercomputer; IBM Watson’s SUSE Linux platform; multicore advances at North Carolina State; and Intel’s new approach to university funding.
Earl Dodd argues that for the HPC cloud to gain practical acceptance as a viable decision-support tool across a wide variety of businesses and industries, it must include remote interactive 3D visualization as a fundamental component of its architecture. Without this vital functionality, the HPC cloud risks being dismissed as a technological novelty with limited commercial success. However, persistent non-technical barriers are still preventing the emergence of a broad new user group in the HPC cloud space.
Johns Hopkins University researchers are developing a specialized machine for uncovering hidden patterns in data; and Appro HyperPower Cluster will support data analysis at Lawrence Livermore National Laboratory. We recap those stories and more in our weekly wrapup.
Cray announces first multi-cabinet XE6 shipment; and SIGGRAPH brings raft of visualization-related announcements. We recap those stories and more in our weekly wrapup.
The world’s largest public GPGPU computing on-demand service was launched this week at the SIGGRAPH International Conference in Los Angeles. PEER 1 Hosting, a provider of IT infrastructure, has constructed a 128-GPU compute cloud that incorporates NVIDIA Tesla gear and mental images’ RealityServer 3D Web platform.
Ultimately, supercomputing is a visual endeavor. Turning the so-called “data deluge” into pictures and animations has always been the most straightforward way to extract insight from HPC simulations. But with the size of simulation datasets growing in tandem with the size of supercomputers, visualization has never been more challenging.
New Mexico’s biggest super at center of statewide academic network.
In a position paper for community input at NSF’s Future of High Performance Computing Workshop in early December, Calit2 Director Larry Smarr reviewed the successes, failures and continuing challenges of the NSF supercomputing program that he helped create.