May 17, 2012
BERLIN, May 17 -- After four days, the third GPU Technology Conference (GTC), organized by graphics giant NVIDIA in San Jose, is ending with an out-of-this-world item on the schedule. NVIDIA, a sponsor of the German Google Lunar X PRIZE team "Part-Time Scientists" (PTS), has invited team members to deliver the "Day 3 Keynote" on the technical status of the lunar rover "Asimov". Asimov, due to land in 2014, is set to be the first autonomously navigated rover on the Moon. This is made possible through a collaboration with the Institute of Robotics and Mechatronics at the German Aerospace Center (DLR), whose technology will be implemented in the rover of the only German team competing for the Google Lunar X PRIZE.
The autonomous navigation system of Asimov is a major technological leap. While the Russian Moon rovers Lunokhod 1 and 2 in the early 1970s were fully controlled from Earth, today's Mars rovers, like NASA's Mars Exploration Rover "Opportunity", which has been tirelessly exploring the Red Planet since 2004, operate autonomously. However, Opportunity requires nearly three minutes to process a single pair of images, a delay that limits it to an average speed of just 1 cm/sec or less. New developments by the technology partnership between the DLR Institute of Robotics and Mechatronics (RMC) and PTS have created, for the first time, an autonomous navigation system for a rover that can process multiple image pairs per second. The system is built around a stereo camera that Asimov will use to calculate its own motion, generate a 2.5-dimensional model of its environment, evaluate the terrain, and determine a collision-free path, all in real time. PTS team leader Robert Boehme said, "Given that there is no GPS on the Moon, it is important that Asimov can orient itself independently and safely explore unknown territory. The faster it does this, the better."
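To make that processing chain concrete, the sketch below shows, in simplified Python, the kind of pipeline such a stereo navigation system implies: disparity converted to metric depth, 3-D points binned into a 2.5D elevation grid, and a simple traversability check. This is a toy illustration under invented assumptions (focal length, baseline, grid size, step threshold), not the actual DLR/PTS software, which is not described in detail in this article.

```python
# Illustrative sketch only -- a toy stereo-to-2.5D pipeline, not the actual
# DLR/PTS navigation software. Assumes rectified stereo images, a pinhole
# camera, and invented parameters throughout.
import numpy as np

FOCAL_PX = 500.0    # focal length in pixels (assumed)
BASELINE_M = 0.12   # stereo baseline in meters (assumed)
CELL_M = 0.05       # edge length of one 2.5D grid cell in meters (assumed)
MAX_STEP_M = 0.08   # largest traversable height step between cells (assumed)

def depth_from_disparity(disparity):
    """Convert a disparity map (pixels) to metric depth (meters)."""
    with np.errstate(divide="ignore", invalid="ignore"):
        depth = FOCAL_PX * BASELINE_M / disparity
    depth[~np.isfinite(depth)] = np.nan   # mask unmatched pixels
    return depth

def elevation_grid(points_xyz, shape=(64, 64)):
    """Bin ground-frame 3-D points (z up) into a 2.5D map: one height per cell."""
    grid = np.full(shape, np.nan)
    for x, y, z in points_xyz:
        i, j = int(y / CELL_M) % shape[0], int(x / CELL_M) % shape[1]
        # Keep the highest point per cell -- conservative for obstacle detection.
        if np.isnan(grid[i, j]) or z > grid[i, j]:
            grid[i, j] = z
    return grid

def traversable(grid):
    """Mark cell borders whose height step is small enough to drive over.

    Comparisons against NaN (unseen) cells yield False, i.e. blocked.
    """
    return np.abs(np.diff(grid, axis=0)) < MAX_STEP_M

if __name__ == "__main__":
    # Toy demo on synthetic data.
    rng = np.random.default_rng(1)
    disparity = rng.uniform(1.0, 20.0, size=(48, 64))
    depth = depth_from_disparity(disparity)
    points = rng.uniform(0.0, 3.0, size=(500, 3))  # stand-in for backprojected pixels
    grid = elevation_grid(points)
    print("median depth (m):", round(float(np.nanmedian(depth)), 3))
    print("traversable cell borders:", int(traversable(grid).sum()))
```

Every stage here operates independently on many pixels or cells at once, which is why this workload lends itself to the GPU acceleration the rover relies on to reach multiple image pairs per second.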
Boehme delivers the keynote at GTC together with PTS chief software developer Wesley Faler.
PTS has been supported by the graphics card manufacturer NVIDIA since 2010 with hardware and know-how for its lunar mission. Among many other uses, GPUs of the Tesla series will be used to calculate the helical trajectory that the PTS mission must follow to the Moon. To claim the $20 million Grand Prize, teams must place a robot on the Moon's surface that explores at least 500 meters and transmits high-definition video and images back to Earth. In addition, the Google Lunar X PRIZE is offering an "Apollo Heritage Bonus Prize" for the team that is able to image artifacts from the Apollo missions. Tesla GPUs will also be used to process the extensive footage that Asimov sends back from the lunar surface, which will be analyzed to help understand how Apollo artifact materials have survived more than 40 years of exposure to the lunar environment.
"It's a search for 'pixels in a haystack,' but it will nevertheless be managed relatively easily due to GPU computing power," said Wesley Faler.
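As a rough illustration of what a "pixels in a haystack" search can look like, the following Python sketch performs brute-force normalized cross-correlation template matching over a frame. This is a generic technique standing in for the team's undisclosed pipeline; all function names and demo data are invented.

```python
# Illustrative sketch only -- brute-force normalized cross-correlation as a
# stand-in for the "pixels in a haystack" search; the team's real pipeline
# is not described in the source. All names and demo data are invented.
import numpy as np

def ncc_scores(frame, template):
    """Slide `template` over `frame`; score each position in [-1, 1]."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.zeros((frame.shape[0] - th + 1, frame.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = frame[i:i + th, j:j + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            out[i, j] = (p * t).sum() / denom if denom > 0 else 0.0
    return out

def best_match(frame, template):
    """Return the top-scoring (row, col) position and its score."""
    scores = ncc_scores(frame, template)
    i, j = np.unravel_index(np.argmax(scores), scores.shape)
    return (int(i), int(j)), float(scores[i, j])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((64, 64))
    template = frame[20:28, 30:38].copy()  # plant the "needle" at (20, 30)
    pos, score = best_match(frame, template)
    print(pos, round(score, 3))            # expect (20, 30) with score 1.0
```

Each candidate position is scored independently of every other, which is exactly the kind of embarrassingly parallel work that maps well onto thousands of GPU threads and makes Faler's confidence plausible.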
ABOUT THE GOOGLE LUNAR X PRIZE
The Google Lunar X PRIZE was created in 2007 by the X PRIZE Foundation to spur lunar exploration missions that are at least ninety percent privately financed. The competition features 26 teams from 16 countries competing for a total purse of 30 million U.S. dollars. The grand prize of 20 million U.S. dollars will go to the first team that fulfills the requirements of the Google Lunar X PRIZE by December 31, 2015. In the event that a government institution completes a similar mission first, the grand prize drops to 15 million U.S. dollars. Through the prize, Google promotes private space exploration and the development of new, cost-effective solutions for the aerospace industry.
Source: Part Time Scientists