Here is a collection of highlights from this week’s news stream as reported by HPCwire.
Pervasive Software Enriches Pervasive DataRush with Scalable Analytics
Concurrent Thinking Spins Out of Parent Company, Raises £1.05M
VLife Teams Up with Computational Research Laboratories
MSU Among 20 Fastest Academic Supercomputing Sites
FLUX Cluster Offers University of Michigan Researchers New HPC Options
RAID Inc. Inks OEM Deal with LSI Corporation
SGI Releases InfiniteStorage 5000 SAS External Storage System
PRACE Calls for One Year Project Grants on Europe’s Fastest Computer
Instrumental Teams with CSC on HPC Work for NOAA
Altera Completes Rollout of 40-nm Stratix IV FPGAs
CEA Releases New Unified Parallel Framework for HPC
Khronos Group Releases OpenCL 1.1 Spec
Sandia to Play Major Role in DOE-Funded Simulation of ‘Virtual’ Nuclear Reactor
SGI Helps Queensland Government Accelerate Climate Science
Victoria Funds MASSIVE HPC Center
Appro Deploys Linux Cluster Testbed at LLNL
This week Appro launched the Data Intensive Testbed Cluster at Lawrence Livermore National Laboratory (LLNL), extending the lab’s existing Hyperion system with 80 Appro server nodes configured with ioMemory technology from Fusion-io. LLNL created the system for the National Nuclear Security Administration’s Advanced Simulation and Computing program’s Hyperion Project, which develops and tests the high performance computing capabilities needed to ensure nuclear safety by replacing underground testing with computer simulations.
The Hyperion project has three objectives: to give scientists I/O testbeds for scalable parallel file systems (such as Lustre and Ceph); to allow the evaluation of large-scale checkpoint/restart mechanisms that don’t depend on global scalable file systems; and to facilitate investigation of cloud-based file systems and analysis tools (such as Hadoop and MapReduce).
According to Steve Conway, IDC Research vice president for high performance computing: “The HPC cluster solution provided by Appro and Fusion-io is designed to give LLNL users significantly more memory and high-speed connections to Lustre than they have had on previous LLNL clusters. In addition, this solution dedicated to testing and scaling is designed to be one of the world’s fastest IO clusters in terms of bandwidth and IOPS critical for improving Linux cluster technologies.”
The Hyperion testbed includes over one hundred terabytes of Fusion-io’s ioMemory modules deployed in ioSAN carrier cards that connect ioMemory over InfiniBand. With the Fusion-powered I/O, the testbed will deliver over 40,000,000 IOPS and 320 GB/s of bandwidth from eighty 1U appliances. Fusion-io’s flash technology consumes a fraction of the power required by traditional memory or hard disk-based alternatives. For more details on Fusion-io’s participation, see their announcement.
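A quick back-of-the-envelope check of those aggregate figures gives a sense of what each 1U appliance contributes. This is a sketch: the per-node numbers below are arithmetic inferences from the quoted totals, not figures published in the announcement.

```python
# Derive per-node I/O figures from the aggregate numbers quoted
# for the 80-node Hyperion testbed. Per-node values are inferred,
# not vendor-published specs.

NODES = 80
TOTAL_IOPS = 40_000_000   # "over 40,000,000 IOPS"
TOTAL_BW_GBPS = 320       # 320 GB/s aggregate bandwidth

iops_per_node = TOTAL_IOPS // NODES   # 500,000 IOPS per 1U appliance
bw_per_node = TOTAL_BW_GBPS / NODES   # 4.0 GB/s per 1U appliance

print(iops_per_node, bw_per_node)     # → 500000 4.0
```

Half a million IOPS and 4 GB/s from a single 1U node was, at the time, well beyond what spinning-disk storage could deliver, which is the point of the flash-based ioMemory design.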
According to John Lee, Appro vice president of advanced technologies solutions, Hyperion is the largest Linux cluster testbed in the world. The system will be used to support National Nuclear Security Administration missions, such as national security projects, climate change research and the quest for new energy resources.
SeaMicro Unveils Revolutionary x86 Server
I usually try to retitle press releases when they score too high on my hype-o-meter, but with SeaMicro’s server redesign announcement this week, I let the claim “Unveils Revolutionary Server” stand. SeaMicro is a Silicon Valley-based startup that has recently emerged from stealth mode to launch an “Internet-optimized” x86 server, the SM10000, which uses low-power Atom processors to handle Web-centric workloads at a big reduction in energy costs. In fact, the company claims to have achieved 75 percent power and space savings over traditional servers.
From the release:
In development for three years, the SM10000 is the ultimate re-think of the volume server. Specifically optimized for the workloads and traffic patterns of the Internet, SeaMicro’s SM10000 integrates 512 Intel Atom processors with Ethernet switching, server management and application load-balancing to create a “plug and play” standards-based server that dramatically reduces power draw and footprint without requiring any modifications to existing software.
SeaMicro has created a new server architecture for scale-out infrastructures, such as those found in the Web tier. Traditionally, servers were designed to quickly solve a small number of difficult problems. In the Internet (or cloud) generation, servers need to solve lots and lots of very small problems, such as the ones we all do online every day: searching, social networking, viewing Web pages, and checking email. SeaMicro claims that the mismatch between the volume server design and this “new” type of Internet-sized task is at the heart of the huge power problem experienced by datacenters today. SeaMicro also overhauled the rest of the system, the non-CPU components, which are responsible for two-thirds of a server’s power draw, and integrated the functionality of the entire datacenter rack — compute, storage, networking, server management and load balancing — into a single 10U system comprising 512 1.6 GHz Intel Atom processors and 1 terabyte of DRAM, drawing 2 kW of power.
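Those figures imply a very small per-processor budget. The following sketch derives them; the per-CPU numbers are my own arithmetic from the quoted totals, not SeaMicro-published specifications.

```python
# Per-processor budget implied by the SM10000 figures
# (512 Atom CPUs, 1 TB DRAM, 2 kW draw). Derived values only.

CPUS = 512
DRAM_GB = 1024    # 1 terabyte of DRAM
POWER_W = 2000    # 2 kW total system draw

watts_per_cpu = POWER_W / CPUS   # ~3.9 W of whole-system power per CPU
dram_per_cpu = DRAM_GB / CPUS    # 2 GB of DRAM per CPU

print(round(watts_per_cpu, 1), dram_per_cpu)   # → 3.9 2.0
```

Under 4 W of total system power per processor — including networking, storage and management, not just the CPU itself — is the basis of the claimed 75 percent savings over conventional Xeon-class volume servers.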
SeaMicro was founded by industry veterans from some of the leading technology companies, such as Cisco Systems, Juniper Networks, Sun Microsystems, Intel, and Advanced Micro Devices (AMD). The company has raised $25 million from strategic partners and venture capitalists and was also awarded a $9.3 million grant from the Department of Energy, which was the largest grant awarded to a server company in the Information and Communication Technology Sector. The SM10000 will be generally available July 30, 2010.
There are more details contained in the release, which you can find here.