Here is a collection of highlights, selected totally subjectively, from this week’s HPC news stream as reported at insideHPC.com and HPCwire.
10 words and a link
RapidMind acquired by Intel, but likely not another PeakStream
Cray partners for embedded diagnostics
P&G partners with NCSA, HPC that’s “squeezably soft”
Woven’s assets picked up by Fortinet
UCLA’s $19M super for neuroscience
LLNL and TotalView team up for debugger on 20PF system
DNA origami used to build tiny circuit boards
LAPACK on CUDA beta available for free
SGI terminates graphics efforts
Last week word started to surface about changes afoot at SGI. VizWorld (as far as I know) broke the news from inside the company that the entire graphics division had been eliminated. This group was mostly focused on the VUE suite of efforts, but there was a little hardware in there too. By Monday I had talked to even more people, well placed enough to confirm most of the story.
SGI’s CEO posted a 932-word blog entry about a shift in the company’s graphics strategy that left a lot to be desired in terms of actual, you know, content. No word about layoffs, no word about VUE, and no real word about strategy except that they think GPUs are swell.
But since this is the sum total of SGI’s response to questions about its vis strategy right now, we’ll take the lack of a refutation as confirmation that the graphics division was eliminated (we know for sure that the VP formerly in charge of visualization at SGI has left the company), and we’ll also assume that VUE is dead.
How internet-scale businesses think about big data
Gary Orenstein has an interesting post at GigaOm called “How Yahoo, Facebook, Amazon & Google Think About Big Data.” These companies have all developed their own approaches to storing petabytes of data that, unlike much of the data in high-end computing, actually gets used more than once after it is written.
Yahoo! has MObStor, Facebook has Haystack, Amazon has Dynamo, and then, of course, there is the Google File System.
Since MObStor is the new kid on the block (judging by when information about it was released), let’s take a look at some of its standout characteristics:
- It’s designed for petabyte-scale content that is site-generated, partner-generated, or user-generated
- Handles tens of thousands of page views every second
- Unstructured storage/objects are mostly images, videos, CSS, and JavaScript libraries
- Reads dominate writes (most data is WORM: write-once read-many)
- Only a low level of consistency is required
- It is designed to scale quickly and efficiently
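The write-once read-many pattern in that list is worth dwelling on, because it is exactly what makes weak consistency tolerable: if an object never changes after it is written, any replica can serve a read without coordinating with the others. Here is a toy sketch of that idea in Python — this is purely illustrative and not MObStor’s (or anyone else’s) actual API:

```python
class WormObjectStore:
    """Toy write-once read-many (WORM) object store.

    Illustrative only -- not the API of MObStor, Haystack, Dynamo,
    or GFS. Objects are immutable after the first write, so replicas
    can answer reads without any coordination: a stale replica can
    only be missing an object, never holding a wrong version of one.
    """

    def __init__(self):
        self._objects = {}

    def put(self, key, blob):
        # Write-once: reject overwrites rather than coordinating
        # an in-place update across replicas.
        if key in self._objects:
            raise ValueError(f"object {key!r} is immutable")
        self._objects[key] = blob

    def get(self, key):
        # Reads dominate writes (images, videos, CSS, JS libraries),
        # and any replica holding the object can serve them.
        return self._objects[key]


store = WormObjectStore()
store.put("img/cat.jpg", b"\x89JPEGDATA")
assert store.get("img/cat.jpg") == b"\x89JPEGDATA"
```

Immutability is doing all the consistency work here: a site that needs to “change” an object simply writes a new key and updates the page that references it.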
One thing all of these approaches have in common is really smart software on top of really cheap hardware — which is not how most storage technology in HPC is built. It will be interesting to see what happens to our storage technologies as more HPC applications come online to deal specifically with the incredible volumes of unstructured data that businesses and researchers increasingly face. I wonder if they will push our community into a crisis akin to the one created by the economics of the commodity CPU shift?
Space station supercomputer will aid in search for antimatter galaxies, dark matter
redOrbit’s space news section carried an interesting piece over the weekend on a space-borne cluster that will be launched in 2010 as part of a new sensor array to be added to the International Space Station (ISS):
The device that does the actual hunting is called the Alpha Magnetic Spectrometer, or AMS for short. It’s a $1.5 billion cosmic ray detector that the shuttle will deliver to the ISS.
In addition to sensing distant galaxies made entirely of antimatter, the AMS will also test leading theories of dark matter, an invisible and mysterious substance that comprises 83 percent of the matter in the universe. And it will search for strangelets, a theoretical form of matter that’s ultra-massive because it contains so-called strange quarks.
One of the legitimate objections to the cloud model of computing is that the movement of data is a rate-limiting factor. This goes double when you are shipping your data down from orbit. The scientists solved this problem by moving the computer to the data:
Many terabytes of data pour out of these sensors, and supercomputers crunch that data to infer each particle’s mass, energy, and electric charge. The supercomputer is part of why AMS must be mounted onto the ISS rather than being a free-flying satellite. AMS produces far too much data to beam down to Earth, so it must carry an onboard supercomputer with 650 CPUs to do the number crunching in orbit. Partly because of this giant computer, AMS requires 2.5 kilowatts of power — far more than a normal satellite’s solar panels can provide, but well within the space station’s 100 kilowatt power supply.
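The back-of-envelope arithmetic behind “move the computer to the data” is easy to sketch. The numbers below are my assumptions for illustration, not AMS specifications — the article says only “many terabytes”:

```python
# Back-of-envelope sketch: why beam down results instead of raw data.
# ASSUMED figures, not AMS specs: 1 TB/day of raw detector events,
# and a sustained space-to-ground link of ~10 Mbit/s.
raw_bytes_per_day = 1e12        # assumed: 1 TB of raw events per day
downlink_bits_per_s = 10e6      # assumed: 10 Mbit/s average downlink

seconds_per_day = 86_400
downlink_bytes_per_day = downlink_bits_per_s / 8 * seconds_per_day
# ~108 GB/day of downlink capacity against ~1 TB/day of raw data:
backlog_ratio = raw_bytes_per_day / downlink_bytes_per_day
print(f"raw data outpaces the link by ~{backlog_ratio:.0f}x")
```

Under these (assumed) numbers the raw stream outruns the link by roughly an order of magnitude, and the gap compounds every day — so the reduction from raw events to particle mass, energy, and charge has to happen in orbit.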
“AMS is basically an all-purpose particle detector moved into space,” [says Nobel laureate Samuel Ting, a physicist at the Massachusetts Institute of Technology, who conceived of the AMS].
-----
John West is part of the team that summarizes the headlines in HPC news every day at insideHPC.com. You can contact him at [email protected].