Here is a collection of highlights, selected totally subjectively, from this week’s HPC news stream as reported at insideHPC.com and HPCwire.
10 words and a link
Cooling CPUs with “nano-fridges”
Intel to discuss eight core Xeon at ISSCC
Download Rocks for Solaris10 Alpha
Getting more from your power dollar in the datacenter
Supercomputing in the US Senate stimulus bill
Altair releases Personal PBS, and it’s free
Book review: Principles of Parallel Programming
Cloud computing in plain English
OSC expanding IBM supercomputer named for John Glenn
Naval Postgraduate School dedicates new super
U of Toronto building Canada’s fastest super with IBM
Looking inside Sun’s results
Mellanox announces Q4 and FY2008, posts profit
QLogic announces Q3, posts profit
Fast forward to 2012: IBM breaks 20PF
The US Department of Energy has just announced a contract awarded to IBM to build a 20 petaflop supercomputer by 2012. Stop, rewind. That’s correct, 20PF. The system, named Sequoia, will be delivered as part of the nuclear stockpile stewardship program at Lawrence Livermore National Laboratory.
It “is the biggest leap of computing capability ever delivered to the lab,” said Mark Seager, the assistant department head for advanced technology at LLNL.
IBM will actually break the delivery into two separate machines. The first, Dawn, will be delivered by mid-year in a 500 TF flavor. Its purpose is to help researchers prepare their codes for the upcoming behemoth. Sequoia will follow Dawn by about two and a half years and will weigh in at roughly 1.6 million IBM Power cores. The final chip configuration has yet to be determined, but the memory and footprint have been set: 1.6 PB of main memory in 96 racks.
LLNL has its own work to do in the matter. Sequoia is driving a major power upgrade to the lab’s computer facilities, from the current 12.5 MW to 30 MW. Sequoia itself will chew through an estimated 6 MW.
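Just for a sense of scale, here is a quick back-of-the-envelope calculation in C using only the numbers announced above, treating PF and PB as powers of ten; the derived figures are my own arithmetic, not IBM’s or the DOE’s.

```c
/* Back-of-the-envelope figures for Sequoia, derived only from the numbers
 * in the announcement: 20 PF peak, 1.6 million cores, 1.6 PB of memory,
 * and an estimated 6 MW draw. Illustrative arithmetic only. */
#include <stdio.h>

int main(void)
{
    const double peak_flops   = 20e15;  /* 20 petaflops            */
    const double cores        = 1.6e6;  /* 1.6 million Power cores */
    const double memory_bytes = 1.6e15; /* 1.6 PB of main memory   */
    const double power_watts  = 6e6;    /* estimated 6 MW draw     */

    printf("Peak per core:   %.1f gigaflops\n", peak_flops / cores / 1e9);
    printf("Memory per core: %.1f GB\n", memory_bytes / cores / 1e9);
    printf("Efficiency:      %.1f gigaflops/watt\n", peak_flops / power_watts / 1e9);
    return 0;
}
```

That works out to roughly 12.5 gigaflops and 1 GB of memory per core, and about 3.3 gigaflops per watt.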
No way around it, this is a huge machine. For more info, read the full article.
Sun CEO says Rock on track for this year
Timothy Prickett Morgan at The Register reported last week on news that Sun CEO Jonathan Schwartz is (publicly) confident that the company’s new Rock processor will finally appear this year. Sun confirmed in February of last year that the chip was being pushed to the second half of 2009, a delay of a year.
The Rock chips, which taped out two years ago, include several new technologies that Sun hopes will give it a competitive advantage in the midrange and high-end of the server market.
Among those technologies, the two biggies are scout threads for the Sparc cores and transactional memory, both of which aim to boost performance beyond what higher clock speeds and additional execution threads alone can deliver. The Rock chips are expected to have 16 Sparc cores, each with two execution threads.
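For readers who haven’t run into transactional memory, here is a minimal sketch of the programming model Rock is chasing. The tx_begin()/tx_commit() calls are hypothetical stand-ins, not a Sun interface; the idea is that the hardware optimistically executes the critical section without taking a lock and only falls back to conventional locking if the transaction aborts.

```c
/* Sketch of the transactional-memory idiom. tx_begin()/tx_commit() are
 * hypothetical stand-ins for a hardware TM interface; the stubs below
 * always report failure, so the code compiles and runs anywhere by simply
 * taking the fallback lock. On TM hardware the optimistic path would
 * usually succeed, avoiding the lock entirely. */
#include <pthread.h>
#include <stdio.h>

static int  tx_begin(void)  { return 0; } /* stub: pretend the transaction aborted */
static void tx_commit(void) { }           /* stub: nothing to commit               */

static pthread_mutex_t fallback_lock = PTHREAD_MUTEX_INITIALIZER;
static long account_a = 100, account_b = 0;

static void transfer(long amount)
{
    if (tx_begin()) {                     /* optimistic path: no lock taken        */
        account_a -= amount;
        account_b += amount;
        tx_commit();
    } else {                              /* transaction aborted: use the lock     */
        pthread_mutex_lock(&fallback_lock);
        account_a -= amount;
        account_b += amount;
        pthread_mutex_unlock(&fallback_lock);
    }
}

int main(void)
{
    transfer(25);
    printf("a=%ld b=%ld\n", account_a, account_b);
    return 0;
}
```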
BYO supercomputer. Worth it?
NetworkWorld posted a feature article this week on Bruce Allen, astrophysicist turned supercomputer manufacturer. Since 1998, Allen has hand-built three clusters in order to further his research in observing theoretical gravitational waves. Allen’s latest contribution to hand-built machines has landed at #79 on the current TOP500 list. The 6,000+ core machine is held together by a gigabit Ethernet backbone, all hand-laid by Allen and his staff at the Max Planck Institute. So why all the work?
“If you go to a company — Dell or IBM — and you say, ‘I’ve got a $2 million budget, what can you sell me for that price?’ you’ll come back with a certain number of CPUs,” he says.
“If you then go and look at Pricewatch or some other place where you can find out how much the gear really costs, you find out that if you build something yourself with the same money you’ll end up with two or three times the processing power.”
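The arithmetic behind that claim is simple enough to sketch. The prices below are illustrative assumptions of mine, not Allen’s numbers or any vendor’s actual quote; they just show how a per-node price gap turns a fixed budget into two to three times the cores.

```c
/* Illustrative only: the per-node prices are assumptions, not real quotes. */
#include <stdio.h>

int main(void)
{
    const double budget          = 2e6;    /* the $2M budget from the quote      */
    const double vendor_per_node = 4000.0; /* assumed turnkey price per node     */
    const double parts_per_node  = 1500.0; /* assumed DIY parts price, same node */
    const int    cores_per_node  = 8;

    printf("Turnkey:    %.0f cores\n", budget / vendor_per_node * cores_per_node);
    printf("Hand-built: %.0f cores\n", budget / parts_per_node * cores_per_node);
    return 0;
}
```

With those made-up numbers the hand-built route ends up with roughly 2.7 times the cores, which is the gap Allen is describing.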
Allen’s success with this method goes back to 1998, when he received an NSF grant to purchase Sun workstations. Rather than buying the workstations, he bought DEC Alpha machines cheaply because they were near end-of-life.
I’ve personally seen and participated in many such grassroots cluster projects. I remember surfing the Fry’s ads looking for specific motherboards on sale. However, it takes a special kind of patience to go this route. Building and supporting machines with 6,000+ cores is very labor intensive. I’ve often thought about the countless hours I spent in the lab as a fuzzy-haired college student debugging BOOTP issues on 10 megabit Ethernet. Is all that effort worth it, or does the vendor-provided turnkey solution really pay off at the end of the day?
I’ll let the audience answer that one. In the meantime, you can read the full article.
Intel Parallel Studio Beta
Intel’s Parallel Studio, about which I wrote in this article for HPCwire, is now available in beta. From Intel:
Intel Parallel Studio, a suite of development tools for C/C++ developers using Microsoft Visual Studio, is now available for beta download. Comprised of Intel Parallel Composer, Inspector and Amplifier, the full suite is the ultimate all-in-one parallelism toolkit that enables Windows developers to create, debug and optimize applications for multicore. To learn more and download Intel Parallel Studio beta or the individual product betas, please visit the Intel Parallel Studio Website.
You can also find webinars on the Studio at the Web site (or here if you want a direct link).
Startup launches virtualized shared memory product
Earlier this week, startup RNA Networks announced that it has launched a software platform that aggregates memory among servers and makes it available to all the servers as a shared memory pool. The company was founded 18 months ago with people from Cray, Akamai, Intel, and QLogic.
The platform at the core of the technology is the Memory Virtualization Platform (MVP); the first product based on MVP is RNAmessenger.
The release doesn’t have much in the way of useful information, but Timothy Prickett Morgan’s article at El Regerino does:
While most server virtualization tools aim to carve up a single box into multiple virtual machines with their own virtual processors, memory, and I/O, RNA’s memory virtualization platform aggregates capacity across servers. In particular, the company’s software aggregates the main memory on server nodes in the network and makes a giant shared pool of virtual memory available to each server node, giving it more room for applications to play.
…The RNA product stack has two elements. The first bit of the memory virtualization platform creates the memory pool from bits of server memory carved out from the individual server main memories inside the servers that are given access to the shared memory pool in the network. This underlying software keeps the memory coherent across the server nodes, much as NUMA and SMP electronics do in hardware.
…The second element to the RNA stack is called RNAmessenger, and it adds a messaging engine and API layer on top of this and a pointer updating algorithm that makes an operating system running on one server node see the shared memory pool as its own main memory. Loadable kernel modules or drivers loaded onto the servers gives applications access to the shared memory and also keeps the global memory coherent. The underlying RNA virtualization can take advantage of RDMA technology, but does not require it. (RDMA allows machines linked to each other to directly access the memory of other servers in a network). One of the first products to support RNAmessenger is IBM’s Cell hybrid Power chip, which has a DMA engine on each chip.
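To make that description concrete, here is a rough sketch of what the application-facing side of such a pool might look like. Every mvp_-prefixed name is hypothetical, invented for illustration rather than taken from RNA’s actual API, and the stubs fall back to plain malloc so the sketch compiles on a single machine; in the real product the allocation would come out of memory contributed by other nodes, with the virtualization layer (optionally over RDMA) keeping it coherent.

```c
/* Hypothetical application view of a network-aggregated memory pool.
 * The mvp_* names are invented for illustration; the stubs use plain
 * malloc() so this compiles and runs locally. */
#include <stdio.h>
#include <stdlib.h>

typedef struct { const char *name; } mvp_pool_t;

static mvp_pool_t *mvp_attach(const char *name)        /* join a named pool      */
{
    static mvp_pool_t pool;
    pool.name = name;
    return &pool;
}

static void *mvp_alloc(mvp_pool_t *pool, size_t bytes) /* allocate from the pool */
{
    (void)pool;
    /* Real product: memory carved out of other nodes' RAM, kept coherent
     * by the virtualization layer (optionally over RDMA). Here: malloc. */
    return malloc(bytes);
}

int main(void)
{
    mvp_pool_t *pool   = mvp_attach("order-book");
    double     *prices = mvp_alloc(pool, 1 << 20);      /* 1 MB from the pool    */

    prices[0] = 42.0;
    printf("price[0] = %.1f (pool: %s)\n", prices[0], pool->name);
    free(prices);
    return 0;
}
```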
The article provides an example of customer performance, citing a trading system whose throughput rose from 6,000 transactions per second to 53,000 transactions per second. The company says it is targeting high performance computing, and the product is supported on Unix and Linux (no Windows yet, and judging from the article the company doesn’t seem to care), but I think you have to pull supercomputing out of that audience based on price:
RNAmessenger is priced per server node and costs between $7,500 and $10,000 per machine, depending on the configuration and type of the server. The software at the heart of the memory virtualization has been patented and is most certainly closed source.
Shazam. That’s a lot of green. Seems like you’d just buy an SGI machine if you needed shared memory: it’s likely cheaper, cache coherence is handled in hardware, and there’s no extra layer of stuff to go wrong.
—–
John West is part of the team that summarizes the headlines in HPC news every day at insideHPC.com. You can contact him at [email protected].