October 02, 2012
Deep space exploration missions are constantly sending their data back to Earth for processing and analysis. However, with the ongoing nature of those missions and future planned missions, it may become difficult for space organizations to store and process all of that information over the next 30 years or so.
The solution? Build a supercomputer on the moon. Ouliang Chang, a graduate student at the University of Southern California, presented that idea, which is to be his PhD thesis, at a space conference in Pasadena.
As of right now, the Deep Space Network, which consists of 13 antennas spread across the United States, Spain, and Australia, is ably processing the information coming from deep space. However, bandwidth on Earth is limited; the data available from deep space exploration may not be. According to a 2006 NASA report, the agency expects, over the next three decades, an “order-of-magnitude increase in data to and from spacecraft and at least a doubling of the number of supported spacecraft.”
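To put those projections in perspective, here is an illustrative back-of-envelope calculation (the report states 30-year totals, not annual rates, so the per-year figures below are inferred, not quoted):

```python
# What compound annual growth rate produces a 10x increase in data
# volume, and a 2x increase in spacecraft count, over 30 years?
data_growth = 10 ** (1 / 30) - 1   # rate for an order-of-magnitude rise
craft_growth = 2 ** (1 / 30) - 1   # rate for a doubling

print(f"data traffic: ~{data_growth:.1%} per year")   # roughly 8% per year
print(f"spacecraft:   ~{craft_growth:.1%} per year")  # roughly 2.3% per year
```

Even modest-sounding annual growth, compounded over three decades, adds up to the order-of-magnitude increase NASA describes.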
A lunar supercomputer would have many advantages over its Earth-bound counterparts. One of the biggest challenges in modern supercomputing is that these machines must be built on Earth and cooled on Earth, and that can be a challenge on our warm little planet.
The temperature of the proposed site would hover between 40 and 60 Kelvin, the result of burying the machine beneath lunar regolith at the bottom of a crater that is perpetually shielded from sunlight. Remembering that 0 Kelvin is absolute zero, cooling suddenly becomes much more manageable.
Further, high-temperature superconducting materials come into play at 40 to 60 Kelvin. On Earth, it takes a tremendous amount of energy to remove a system’s internal heat and hold it near superconducting temperatures. In a permanently shadowed lunar crater, those temperatures are already a reality.
The regolith also would protect the supercomputer from radiation, a significant concern on a surface that is not protected by magnetic fields. The location also offers some protection from asteroids.
The biggest advantage with respect to its mission is its accessibility to the Deep Space Network. Instead of transmitting through crowded Earth-bound channels, satellites and spacecraft would send their information to a lunar antenna network strategically placed away from all of the Terran electromagnetic noise.
Not surprisingly, such an undertaking would cost plenty of money. While there are some slightly outlandish post-construction funding recovery ideas, such as hosting a sort of robotic moon Olympics, it is more likely that the cost of shipping materials to space will have to decrease significantly before a moon-based supercomputer could become feasible.
From excavating and engineering a site to the actual building of the lunar supercomputer, the monetary commitment would be massive. According to a Wired article, it costs about $50,000 to ship a pound of material into space. The total cost is estimated to exceed $10 billion, which would make this the solar system’s most expensive supercomputer.
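Those two figures set a hard ceiling on how much hardware could ever leave the ground. A rough, illustrative calculation using the article’s numbers (and ignoring that construction, excavation, and operations would consume much of the budget) shows the implied payload limit:

```python
# Upper bound on launchable mass, using the figures cited above.
COST_PER_LB = 50_000             # USD per pound to space (Wired figure)
TOTAL_BUDGET = 10_000_000_000    # USD, the estimated project cost

payload_lb = TOTAL_BUDGET / COST_PER_LB
payload_tonnes = payload_lb * 0.45359237 / 1000  # pounds -> metric tons

print(f"Max payload if the entire budget went to launch costs: "
      f"{payload_lb:,.0f} lb (~{payload_tonnes:,.0f} metric tons)")
```

At today’s rates, the whole $10 billion would buy launch capacity for only about 200,000 pounds, roughly 91 metric tons, which is why cheaper access to space is the real gating factor.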
However, Chang estimates that the project would still need ten years or so to become technologically feasible. By that point, carbon nanotube technology may have progressed to the point where a space elevator could be built, drastically decreasing shipping costs. Failing that, a more efficient propulsion system could be developed.
A lunar supercomputer could also serve as a backup to Earth systems in a catastrophe, an idea proposed in 2004 by Space Systems Loral. It could also provide data management support for possible future lunar and space missions.
The idea of a lunar supercomputer seems straight out of science fiction. Stanley Kubrick’s science fiction, however, predicted such a supercomputer eleven years ago, in 2001. Perhaps, with a full embrace of Chang’s ideas and advancements in space shipping, a supercomputer on the moon can become a reality in another eleven years.