March 31, 2011
The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the UK-based Atomic Weapons Establishment's selection of two SGI Altix systems; Platform's new solution for managing "big data"; the effect of rising sea levels on the North Carolina coastal region; SDSC's new portal for conducting phylogenetic research; and the selection of Ian Foster for this year's IEEE Tsutomu Kanai Award.
SGI Supports UK Atomic Weapons Establishment
The Atomic Weapons Establishment (AWE), located in the United Kingdom, has selected two SGI Altix UV 1000 systems to assist with several critical applications, including nuclear deterrence.
In the official release, Ken Atkinson, HPC strategy and procurement manager at AWE, was quoted as saying:
"The breadth and depth of our science, engineering and technology is extensive, and includes several key areas that are central to AWE's work, such as plasma physics, design physics and supercomputing. We require the most advanced high performance computing to support our demanding, large memory applications, and looked to SGI to provide a system on which we can rely. We can now run our largest problem sets in less than half the time it previously took, bringing the total cost of ownership over the next three years to less than 50 percent of the current level."
SGI officials explain that the Altix UV systems were designed for maximum scalability. The fully-integrated cabinet-level solution comes equipped with up to 256 sockets (2,048 cores) and 16 terabytes of shared memory in four racks. All told, a single system image can deliver up to 18.5 teraflops of computing power.
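For readers keeping score, that peak figure squares with the published core count. A quick back-of-the-envelope check (assuming roughly 2.26 GHz Xeon 7500-class parts and four double-precision flops per core per clock, typical of the UV 1000 generation; those processor details are our assumption, not drawn from the release):

    # Rough peak-performance check for a fully configured Altix UV 1000.
    # Clock speed and flops-per-cycle are assumptions about the Xeon
    # 7500-class processors of that era, not figures from the SGI release.
    cores = 2048            # 256 sockets x 8 cores per socket
    clock_ghz = 2.26        # assumed core clock
    flops_per_cycle = 4     # assumed double-precision flops per core per cycle
    peak_tflops = cores * clock_ghz * flops_per_cycle / 1000.0
    print(round(peak_tflops, 1))   # ~18.5 teraflops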
AWE is tasked with manufacturing and maintaining the warheads for the UK's nuclear deterrent, Trident. The SGI systems will assist AWE in meeting its goal of maintaining the nuclear arsenal without the need for actual nuclear testing, in accordance with UK law.
Platform Computing Develops "Big Data" Solution
Platform Computing announced it has created an analytics solution to support "big data" using the Apache Hadoop MapReduce programming model. Platform's distributed analytics platform is fully compatible with MapReduce, providing current MapReduce applications with a smooth transition to the Platform solution, while also supporting multiple distributed file systems.
From the release:
By extending enterprise-class capabilities to MapReduce distributed workloads, customers benefit from the ability to scale to thousands of commodity server cores for shared applications. The results include the ability to perform at very high execution rates, offer IT manageability and monitoring while controlling workload policies for multiple lines of business users and applications and obtain built-in, high availability services that ensure quality of service.
Carl Olofson, research vice president, IDC, describes how the solution addresses customer needs:
"Customers need a robust solution to manage and process their dynamically defined data, their sensor data, and their unstructured data. MapReduce has proven to be a leading tool for analyzing this data, but customers need enterprise-class solutions to ensure manageability and scalability for these environments. Platform is positioned well to provide distributed workload and enterprise class middleware to address these challenges."
In our feature coverage of this story, Editor Nicole Hemsoth reveals how Platform created the new offering by layering APIs over the company's core distributed computing middleware, Platform Symphony. Symphony provides the distributed management and job execution engine, over which the Platform developers affix specific APIs for different job types. Hemsoth explains that "users can manage complexity by using the Symphony framework along with those APIs, and on the backside, using connectors to file systems or databases to serve as I/O for MapReduce jobs."
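For those unfamiliar with the programming model in question, the sketch below illustrates the basic MapReduce flow — map, shuffle, reduce — with a simple word-count example in plain Python. It shows only the generic model that Platform's offering supports; it does not use Platform Symphony's own APIs, which are not described in the release.

    # Minimal, self-contained sketch of the MapReduce programming model
    # (word count), independent of any particular runtime such as Hadoop
    # or Platform Symphony.
    from collections import defaultdict

    def map_phase(record):
        # Emit (key, value) pairs: one (word, 1) per word in the record.
        for word in record.split():
            yield (word.lower(), 1)

    def reduce_phase(key, values):
        # Combine all values for a key into a single result.
        return (key, sum(values))

    def run_mapreduce(records):
        # Shuffle: group intermediate values by key before reducing.
        groups = defaultdict(list)
        for record in records:
            for key, value in map_phase(record):
                groups[key].append(value)
        return [reduce_phase(k, v) for k, v in sorted(groups.items())]

    if __name__ == "__main__":
        data = ["big data needs big clusters", "mapreduce splits big jobs"]
        print(run_mapreduce(data))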
Rising Sea Levels Could Alter North Carolina Coast
Scientists at the North Carolina-based research organization RENCI are employing sophisticated computer modeling to predict how the changing climate will affect the North Carolina coast in the coming centuries. Specifically, they have been able to show how "increases in sea level over the next 100 years could affect coastal communities, wildlife and the coastline itself."
Researchers are concerned that meltwater from glaciers and thermal expansion from warming oceans will raise sea levels by one-half to 1 meter (1.6 to 3.2 feet) over the course of the next century and by 1 to 2 meters (3.3 to 6.5 feet) over the next 200 years. If that occurs, the North Carolina coast will undergo an extreme makeover, according to UNC-Chapel Hill marine scientist Tom Shay.
Shay and his colleagues are using the ADCIRC coastal storm surge modeling software and the SWAN (Simulating WAves Nearshore) wave modeling software to illustrate how future weather events, combined with higher sea levels, could affect the coastal region. The RENCI supercomputer is running multiple storm simulations to generate forecasts about future coastal climate change and the associated risks to the area's inhabitants and the local economy.
Shay says that if sea level rises by a meter, "we will see higher tides, higher tidal velocities and tidal inundation every day. And we'll have a different shoreline."
The scientists hope that their work will help inform policy, but are also careful to point out that there is a degree of uncertainty when dealing with one-hundred-year timeframes.
CIPRES Gateway Speeds Phylogenetic Analyses
The San Diego Supercomputer Center (SDSC) has unveiled a new resource that facilitates the study of evolutionary relationships among large populations of living things. The tool, called CIPRES (CyberInfrastructure for Phylogenetic RESearch) Gateway, is a Web portal that allows scientists to upload their data via a standard Internet browser and perform phylogenetic analyses from any location. Designed for ease of use and accessibility, CIPRES allows researchers to generate results in less time without the need for high levels of computer expertise.
Mark Miller, principal investigator in SDSC's Research, Education and Development group, and leader of the CIPRES Gateway project, examines some of the practical benefits that come from a thorough understanding of evolutionary relationships. For example, he explains how "knowing the evolutionary relationships among a group of viruses or bacteria can help doctors understand where an infection came from, effectively treat patients who are infected, and work to contain the spread of disease during an outbreak."
This is a field that relies heavily on high-end computational resources. As Miller notes, "there are only three possible relationships between any four individuals, but there are more than two million different relationships between 10 individuals. A computer that could analyze a million trees per second would require about 20 billion years to test all the possible relationships for just 22 individuals! As the amount of data grows, so do the computing requirements." That's where the CIPRES Gateway and TeraGrid supercomputers come in. The parallel computing power of the TeraGrid systems allows large sequencing problems to be broken into smaller pieces that can be run simultaneously across many processor cores.
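The combinatorics behind Miller's numbers are easy to check. The number of distinct unrooted binary trees relating n individuals is the double factorial (2n - 5)!!, and a short script (our illustration, not part of the CIPRES software) reproduces his figures:

    # Number of distinct unrooted binary (bifurcating) trees for n taxa:
    # (2n - 5)!! = 3 * 5 * 7 * ... * (2n - 5), for n >= 3.
    def unrooted_tree_count(n):
        count = 1
        for k in range(3, 2 * n - 4, 2):
            count *= k
        return count

    for n in (4, 10, 22):
        print(n, unrooted_tree_count(n))
    # 4  -> 3 trees
    # 10 -> 2,027,025 trees (the "more than two million" in Miller's example)
    # 22 -> about 3.2e23 trees; at one million trees per second that is
    #       roughly ten billion years, the same order of magnitude Miller cites.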
Operational for a little over a year, the CIPRES Gateway has already assisted more than 2,000 scientists who have used the platform to run more than 35,000 analyses for approximately 100 completed studies in the biological and medical arenas.
Ian Foster Receives Prestigious Tsutomu Kanai Award
The University of Chicago announced that Ian Foster was selected as this year's IEEE Tsutomu Kanai Award recipient. The award, named in honor of former Hitachi president Tsutomu Kanai, is given in recognition of major contributions to state-of-the-art distributed computing systems and their applications, and includes a $10,000 honorarium.
Ian Foster is the director of the Computation Institute, a joint initiative between the University of Chicago and Argonne National Laboratory. He is also the Arthur Holly Compton Distinguished Service Professor of Computer Science at UChicago and an Argonne Distinguished Fellow.
Foster is both a leading advocate and a pioneering visionary in the field of distributed computing. As the announcement points out, the methods that Foster and his colleagues have developed "allow computing to be delivered reliably and securely on demand, as a service, and permit the formation and operation of virtual organizations linking people and resources worldwide. These results, and the associated Globus open-source software, have helped advance discovery in such areas as high-energy physics, environmental science and biomedicine. Grid computing methods also have proved influential outside the world of science, contributing to the emergence of cloud computing."
Foster shared his thoughts on the achievement in a prepared statement:
"I am extremely honored to receive this award. Distributed computing is critical for solving complex system-level problems in a wide range of applications, from energy and climate to bioinformatics and molecular engineering, and continues to enable breakthroughs in research across the sciences."
The award ceremony will be held May 25, 2011, in Albuquerque, NM.