Here is a collection of highlights, selected totally subjectively, from this week’s HPC news stream as reported at insideHPC.com and HPCwire.
10 words and a link
SGI receives second delisting notification
Science in the FY09 omnibus
New HPC platform aimed at financial market
Sun launches SSD solution for servers
Pervasive launches new data-intensive computing app
NVIDIA offers support to new companies building on products
“Mythbusting”: OpenMP supporter rebuts recent blog post
New Japanese plasma simulator in production
“Resource responsible computing” at ORNL
Intel invests in new HPC center in France
Design principles for multicore- and cloud-ready apps
HPC adoption conference announced
SC09 Call for Tutorials released
The ballad of the Massively Parallel Supercomputing Pioneers
NVIDIA reveals plan to develop own x86 products
From a recent blog post at HPCwire:
“I think some time down the road it makes sense to take the same level of integration that we’ve done with Tegra,” said Hara [Michael Hara, senior VP of investor relations]. “Tegra is by any definition a complete computer on a chip, and the requirements of that market are such that you have to be very low power, very small, but highly efficient. So in that particular state it made a lot of sense to take that approach, and someday it’s going to make sense to take the same approach in the x86 market as well.”
He went on to say that it was not a matter of if the company will do this, but when, and gave a two- to three-year timeframe for when we might expect to see the first NVIDIA x86 parts. At that point, SoC architectures will make sense even for larger platforms like small form factor PCs (netbooks and nettops), a market NVIDIA is currently going after with its ION platform. ION pairs a GeForce 9400 GPU with an Intel Atom CPU on a hand-sized board.
Michael goes on to speculate how this might impact HPC workloads (not directly at first, since they don’t intend to compete in the server space, but could over the long term) and how NVIDIA would license the platform given that they aren’t exactly on Intel’s Friends and Family plan. It’s a good read.
Cray and virtualization provider ScaleMP hook up
From Cray’s Web site, news that Cray is partnering with ScaleMP to use virtualization technology to bring shared memory to the company’s CX1 deskside super:
Cray Inc. and ScaleMP, a leading provider of virtualization solutions for high-end computing, today announced a strategic alliance to offer joint solutions based on the Cray CX1(TM) deskside supercomputer and ScaleMP’s vSMP Foundation. Available immediately, the joint solution will target the High Performance Computing (HPC) segment allowing customers to operate a shared-memory, deskside supercomputer that scales up to 128 cores and 1TB of shared memory.
…
vSMP Foundation aggregates multiple industry-standard, off-the-shelf x86 servers (rack mounted or blade systems) into one single virtual high-end system for the HPC market. vSMP Foundation provides customers with an alternative to traditional expensive symmetrical multiprocessor (SMP) systems and also offers simplified clustering infrastructure with a single operating system. It currently allows customers to create a single virtual SMP system with up to 32 sockets (128 cores) and up to 4 TB of shared memory in an energy-efficient, dense package.
I will be very interested in the performance of this and the other software-enabled shared memory solutions coming to market right now. If they become performance-competitive with hardware-supported systems like SGI's (or, more likely, if they are able to use the virtualization hooks that chip vendors are building into silicon to accelerate the approach), then this could cut into SGI's shared memory business.
AT&T wants to run your datacenter
Well, maybe not your HPC datacenter, but they are definitely lining up to run big enterprise datacenters, tapping into what Gartner reported was a $19B business in 2008. From an article at The Register:
Thanks to a bunch of acquisitions, AT&T has offered application hosting services and application management services and sold networking services for quite a while. And with the RIM (remote infrastructure management) service announced this week, AT&T is leveraging the expertise it has running some of the largest data centers in the world, deploying its tools and know-how into data centers owned by third parties.
…
AT&T is offering to design, deploy, and maintain the server, storage, and networking gear at a customer site, providing three levels of service with three different price bands. The deals also have provisions to have AT&T babysit the gear and send in technicians to repair or upgrade gear, and the AT&T BusinessDirector portal that enterprise customers use to monitor their AT&T managed services has been tweaked so it can see servers and storage.
AT&T is also pimping its cloud solution these days (though it apparently does not use the word “cloud” at all):
AT&T is not just interested in running your data center. Last August, as part of a $1bn investment in its global computing network, the company announced a utility-style computing platform called Synaptic Hosting, which is based on technology created by USinternetworking.
—–
John West is part of the team that summarizes the headlines in HPC news every day at insideHPC.com. You can contact him at [email protected].