October 19, 2007
Here's a collection of highlights, selected totally subjectively, from this week's HPC news stream as reported at insideHPC.com and HPCwire.
>>10 words and a link
First customer ship for SiCortex: 6 TFLOPS, less than 20kW;
First InfiniBand-based, scalable Network Attached Storage solution;
Linux Magazine on HPC file systems;
New Globus incubator project aims to make HPC easier;
QLogic and partners demo Fibre Channel over Ethernet solution;
New tech from Hitachi: 4 TB 3.5" drive by 2011;
Fujitsu debuts new SAN for small to mid-size enterprises;
Animation Studio ROCKS roll announced;
FSU material could give boost to quantum computing.
>>AMD can sell 'em, they just can't ship 'em
Apparently AMD's new quad-core Opteron processors are selling like hotcakes. The bad news is they can't make them fast enough. Damon Poeter at CMP Channel writes that AMD's partners are generally happy with the new Opteron products, but at least some are having trouble getting a hold of enough parts.
"There are no hardware conflicts and the power draw is as promised. They delivered on their technicals. On these high-performance compute and memory-intensive applications, they're kicking Intel's butt," said Brian Corn, VP of marketing and business development at Waltham, Mass.-based Source Code.
This is a far cry from where we were in June, when partners were publicly questioning whether AMD could get Barcelona's performance high enough to launch (http://insidehpc.com/2007/06/06/barcelona-demod-at-16-ghz-partners-question-july-launch/).
But the news isn't all good; when it comes to actually delivering product, Corn is far less enthusiastic about AMD's performance.
"We're extremely disappointed with AMD on a product delivery level…. The real problem seems to be that AMD doesn't have any of these things."
According to some of AMD's channel partners, the company is giving the Tier 1 system vendors and some favored partners first crack at the new quads, leaving the dregs for the channel.
Did AMD push Barcelona out the door too fast or did it just underestimate the demand? According to Corn, AMD hasn't offered an explanation for the hold-up.
Read the full story at http://v3.crn.com/white-box/202402138.
>>IBM makes progress on carbon nanotube-based computing
IBM announced today that it's been playing around with carbon nanotubes, and has come up with a way to measure the distribution of electrical charges in tubes smaller than 2 nm across. This is an incremental step along the path toward using carbon nanotubes as semiconductors and wires on chips.
This novel technique, which relies on the interactions between electrons and phonons, provides a detailed understanding of the electrical behavior of carbon nanotubes, a material that shows promise as a building block for computer chips that are much smaller, faster, and lower power than today's conventional silicon transistors.
From the announcement (http://www-03.ibm.com/press/us/en/pressrelease/22441.wss):
"The success of nanoelectronics will largely depend on the ability to prepare well characterized and reproducible nano-structures, such as carbon nanotubes," said Dr. Phaedon Avouris, IBM Fellow and lead researcher for IBM's carbon nanotube efforts. "Using this technique, we are now able to see and understand the local electronic behavior of individual carbon nanotubes."
>>Europe's leading super online
German publication heise online is reporting that Jülich Supercomputing Centre has installed a 220 teraflop Blue Gene/P system, which is now the most powerful supercomputer in Europe. The article also points out that since the machine uses a mere 500 kilowatts, the new Blue Gene/P is one of the most energy efficient computers in the world. (For comparison, the 500 teraflop Sun supercluster being installed at the Texas Advanced Computing Center will draw 2.4 megawatts -- almost five times as much for just twice the FLOPS.)
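The efficiency gap is easy to quantify from the figures in the article. Here's a quick back-of-the-envelope comparison (the numbers come straight from the story; the FLOPS-per-watt framing is ours):

```python
# Energy efficiency comparison using the figures reported in the article.
systems = {
    "Juelich Blue Gene/P":   (220e12, 500e3),  # 220 TFLOPS, 500 kilowatts
    "TACC Sun supercluster": (500e12, 2.4e6),  # 500 TFLOPS, 2.4 megawatts
}

for name, (flops, watts) in systems.items():
    efficiency = flops / watts / 1e9  # GFLOPS per watt
    print(f"{name}: {efficiency:.2f} GFLOPS/W")
```

The Blue Gene/P works out to roughly 0.44 GFLOPS per watt versus about 0.21 for the TACC machine, i.e. a bit more than twice the work per watt.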
The heise online article also points to a recent presentation given by Alan Gara, the chief architect for Blue Gene, who foresees a persistent supercomputing energy crisis in the years ahead:
Mr. Gara is convinced that it will be possible at some time between 2015 and 2020 to achieve peak performance of 200 petaflops, but that the machines capable of such feats will require 25 to 50 megawatts of power. And this assessment already assumes a 20-fold improvement in energy efficiency.
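Gara's numbers can be sanity-checked against today's figures. A sketch, assuming the article's Blue Gene/P baseline of 220 teraflops at 500 kilowatts:

```python
# Does a 20x efficiency gain over today's Blue Gene/P get you to
# 200 petaflops within Gara's 25-50 MW envelope?
bgp_efficiency = 220e12 / 500e3          # ~0.44 GFLOPS/W today
projected_efficiency = bgp_efficiency * 20  # the 20-fold improvement Gara assumes
target_flops = 200e15                    # 200 petaflops

power_needed = target_flops / projected_efficiency
print(f"Power needed: {power_needed / 1e6:.1f} MW")  # prints ~22.7 MW
```

The result lands just under the low end of Gara's 25-50 megawatt range, so his estimate is consistent with a 20x improvement over the Blue Gene/P baseline, with some headroom built in.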
Read the full story at http://www.heise.de/english/newsticker/news/97535.
John West is part of the team that summarizes the headlines in HPC news every day at insideHPC.com. You can contact him at email@example.com. Too busy to keep up? Make your commute productive and listen to the Weekly Takeout, insideHPC.com's weekly audio news summary of the HPC news week in review.