July 25, 2012
This week, NASA announced it would soon be launching a new HPC and data facility that will give earth scientists access to four decades of satellite imagery and other datasets. Known as the NASA Earth Exchange (NEX), the facility is being promoted as a "virtual laboratory" for researchers interested in applying supercomputing resources to studying areas like climate change, soil and vegetation patterns, and other environmental topics.
Much of the work will be based on high-resolution images of Earth that NASA has been accumulating since the early 1970s, when the agency began collecting the data in earnest. Originally known as the Earth Resources Technology Satellite (ERTS) program, and later renamed Landsat, its mission was to serve up images of the Earth, allowing scientists to observe changes to our planet over time, from forest fires and urban sprawl to climate change. Data generated by these satellites has been extremely popular in the global science community: in the last 10 years, more than 500 universities around the globe have used Landsat data to support their research.
Over time though, the program's growth created a logistical problem. Multiple datasets eventually spanned facilities around the US, which presented challenges for researchers looking to retrieve satellite imagery. Recognizing the issue, NASA created the NEX program with the goal of increasing access to the three-petabyte library of Landsat data.
NEX will house all data generated by Landsat satellites and related datasets, as well as offering analysis tools powered by the agency's HPC resources. We spoke with NASA Ames Earth scientist Ramakrishna Nemani, who explained the purpose behind the NEX facility and how it has been implemented. "The main driver is really big data," he told HPCwire. "Over the past 25 years we have accumulated so much data about the Earth, but the access to all this data hasn't been that easy."
Prior to NEX, he said, researchers were tasked with locating, ordering and downloading relevant data themselves. The process could be time-consuming because the satellite imagery they wanted might be spread across several locations, and even after locating the desired images, data transfer times were often prohibitive.
NASA set out to solve the problem by leveraging one of its strongest assets: supercomputing. The agency decided to take all of the disparate datasets and migrate them to the Ames Research Center. "We said 'let's do an experiment.' We already have a supercomputer here at Ames, so we can bring all these datasets together and locate them next to the supercomputer," said Nemani.
That system, known as Pleiades, is the world's largest SGI Altix ICE cluster and the agency's most powerful supercomputer. Pleiades has been upgraded over time, accumulating several generations of Intel Xeon processors: Harpertown, Nehalem, Westmere, and, most recently, Sandy Bridge. For extra computational horsepower, the Westmere nodes are equipped with NVIDIA Tesla GPUs. Linpack performance is 1.24 petaflops, which earned it the number 11 spot on the June 2012 TOP500 list.
The system also includes 9.3 petabytes of DataDirect storage. With that capacity, Ames is now able to host the three petabytes of image data at a single location. But NEX was created to do more than hold all the satellite imagery under one roof. A collection of tools was developed to help researchers analyze the data using the Pleiades cluster.
For example, a scientist could map vegetation patterns with the toolset, piecing together images like a jigsaw puzzle. The program estimates that a scene containing 500 billion pixels can be processed in under 10 hours. Without the NEX toolset, scientists would have to develop their own computational methods to perform similar research.
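The NEX toolset itself isn't described in detail here, but a minimal sketch of that kind of tiled, jigsaw-style processing might look like the following. Everything in it is an illustrative assumption rather than actual NEX code: the toy vegetation index, the tile layout, and the use of a local process pool standing in for scheduling work across Pleiades nodes.

```python
# Illustrative sketch only -- not the actual NEX toolset.
# Assumes scenes arrive as NumPy arrays already georeferenced to a common
# grid; real Landsat processing also handles reprojection, cloud masking,
# and radiometric calibration.
from multiprocessing import Pool

import numpy as np


def process_tile(tile):
    """Compute a toy NDVI-like vegetation index for one tile (band 0 = red, band 1 = NIR)."""
    red = tile[..., 0].astype(float)
    nir = tile[..., 1].astype(float)
    return (nir - red) / np.clip(nir + red, 1e-6, None)


def mosaic(tiles, grid_shape):
    """Process tiles in parallel, then piece the results back together like a jigsaw puzzle."""
    rows, cols = grid_shape
    with Pool() as pool:
        results = pool.map(process_tile, tiles)
    tile_rows = [np.hstack(results[r * cols:(r + 1) * cols]) for r in range(rows)]
    return np.vstack(tile_rows)


if __name__ == "__main__":
    # Four synthetic two-band tiles arranged in a 2x2 grid.
    tiles = [np.random.randint(0, 255, (512, 512, 2), dtype=np.uint8) for _ in range(4)]
    scene = mosaic(tiles, (2, 2))
    print(scene.shape)  # (1024, 1024)
```

On a real system the per-tile work would be distributed across compute nodes rather than local processes, but the structure of the problem, independent tiles stitched into one scene, is what makes it a good fit for a large cluster.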
While making Pleiades' compute resources available was beneficial for researchers, it posed something of a challenge for the NEX project team, since a certain level of virtualization is required to support concurrent access. The marriage of virtualization and supercomputing can be "tricky business," according to Nemani, but the program had a unique plan in this regard.
"We have two sandboxes that sit outside of the supercomputing enclave," he said. "We bring in people and have them do all the testing on the sandboxes. After they get the kinks worked out and they're ready to deploy, we send them inside."
Eventually, the program would like to have scientists run their own sandbox program and upload it to the supercomputer as a virtual machine.
While NEX has some cloud elements to it, NASA could not feasibly run the project on public cloud infrastructure. "We are trying to collocate the computing and the data together, just like clouds are doing. I would not say this is typical cloud because we have a lot of data. I cannot do this on Amazon because it would cost me a lot of money," said Nemani.
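A quick back-of-envelope calculation shows why a multi-petabyte archive strains a public-cloud budget. The rates below are hypothetical placeholders, not actual Amazon pricing from that era or any other; the point is only that per-gigabyte charges add up quickly at the three-petabyte scale.

```python
# Back-of-envelope only: both rates are assumed, illustrative values,
# not real cloud pricing.
STORAGE_USD_PER_GB_MONTH = 0.10   # assumed object-storage rate
EGRESS_USD_PER_GB = 0.12          # assumed data-transfer-out rate

dataset_gb = 3 * 1024 * 1024      # ~3 PB of Landsat imagery, in GB

monthly_storage = dataset_gb * STORAGE_USD_PER_GB_MONTH
one_full_egress = dataset_gb * EGRESS_USD_PER_GB
print(f"storage: ~${monthly_storage:,.0f}/month, one full copy out: ~${one_full_egress:,.0f}")
```

Even with these rough numbers, storing the archive runs to hundreds of thousands of dollars per month before any compute is purchased, which is the economics Nemani is alluding to.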
The NEX program also features a unique social networking element, which allows researchers to share their findings. It's not uncommon for scientists to move on after working on a particular topic, which reduces access to the codes and algorithms used in their research. The social networking tools provided by NEX allow peers to go back and verify the results of previous experiments. Combined with access to HPC and the legacy datasets, the facility provides what may be the most complete set of resources of its kind in the world.
"Basically, we are trying to create a one-stop shop for earth sciences," said Nemani.