May 05, 2006
This week's issue contains an eclectic mix of news and features from the world of high performance computing. We've covered everything from DARPA's HPCS petascale program to modeling potato chips. In between, we touch on HyperTransport, Dutch clusters, and nanoelectronics. Here are some highlights.
This issue marks the return of the High-End Crusader (HEC), whose unique perspective has been missing from HPCwire for much too long. For our newer readers, HEC is probably our most famous, and certainly our most mysterious, contributing editor. He keeps his identity hidden so that he can freely express his opinions without regard for HPC political correctness. Since I'm sworn to secrecy, I can't tell you very much about him, but a few tidbits can be revealed:
-- HEC is probably one of the most patriotic people I know (he makes Thomas Jefferson look like a flag-burner). So when he talks about national security computing issues, both his head and heart are involved.
-- As a hardcore HPC'er, he loves "Big Iron," but not to the point that he can't appreciate a nice cluster when he sees one (see below).
-- His favorite television program is "Cold Case." Not a huge surprise. Definitely a show for the analytic type.
-- He says he can't live without Google, but doesn't consider himself a geek. Allegedly he doesn't have a fully functional computer at home.
The stuff I can't reveal would fill the rest of this article. Suffice it to say, he's one of the most interesting characters that I've met in the HPC universe.
Our recent coverage of DARPA's HPCS initiative (April 7) left the High-End Crusader wanting more -- much more. In our lead feature this week, HEC offers his perspective on the program and tells us what he believes the government and the vendors should be focusing on. In the process, he gives us a lesson on heterogeneity and on some of the subtler aspects of parallelism and locality. Heady stuff. But worth reading -- if only to witness HEC admit that "clusters remain very cost effective for easily localizable applications."
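If you're wondering what "easily localizable" means in practice, here's a minimal sketch -- my own illustration, not something from HEC's article. In a 1-D diffusion stencil, each node needs only a single boundary value from each neighbor per iteration, so the communication volume stays constant no matter how large the local problem grows. That's the kind of workload that makes clusters look good; algorithms that need global data movement at every step are another matter entirely.

```python
# A toy 1-D diffusion solver; run with e.g. "mpiexec -n 4 python diffuse.py".
# Each rank owns a slab of the domain and exchanges just two scalar "halo"
# values per iteration -- the hallmark of an easily localizable problem.
# (Assumes the mpi4py package is installed.)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 1000                    # grid points owned by this rank
u = [0.0] * n_local
if rank == 0:
    u[0] = 100.0                  # fixed hot boundary at the far left

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(100):
    # Halo exchange: two numbers cross the interconnect per rank,
    # regardless of how large n_local is.
    halo_left = comm.sendrecv(u[-1], dest=right, source=left)
    halo_right = comm.sendrecv(u[0], dest=left, source=right)
    ext = [halo_left if halo_left is not None else u[0]] + u + \
          [halo_right if halo_right is not None else u[-1]]
    u = [0.5 * (ext[i - 1] + ext[i + 1]) for i in range(1, n_local + 1)]
    if rank == 0:
        u[0] = 100.0              # re-impose the boundary condition
```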
HyperTransport Part III
The HyperTransport 3.0 spec was released last week, and yours truly got the chance to talk with Mario Cavalli, general manager of the HyperTransport Consortium, and David Rich, the Consortium's president, about the spec's new capabilities. Our feature article gives some background on the HyperTransport technology and describes some of the new features.
For those of you unfamiliar with the technology, HyperTransport is an elegant, high-performance system interconnect designed to replace the older front-side bus architecture used in many systems today. Since HyperTransport's introduction in 64-bit AMD processors, it has steadily gained momentum in the IT industry. As the basis of AMD's Direct Connect Architecture, HyperTransport has been instrumental in propelling the Opteron's success in the marketplace. Intel, the notable non-adopter of HyperTransport, is still developing its own interconnect, called CSI. CSI was originally scheduled for release next year, but the latest rumor is that it won't be available until 2008; Intel itself has said nothing publicly.
Dutch clusters come up to speed -- and then some
This week's announcement about the Dutch DAS-3 grid (Distributed ASCI Supercomputer) illustrates the worldwide commitment to advancing distributed computing. ClusterVision, which specializes in Linux supercomputer clusters, has been awarded the contract to build the grid.
A Myricom Myri-10G network will provide connections between the servers in four of the five DAS clusters, as well as connections to the grid's SURFnet optical backbone. The announcement provided me with the opportunity to speak with Chuck Seitz, Myricom founder and CEO. According to Seitz, when this system becomes operational in August, it's going to be "the fastest grid of clusters in the world." In an upcoming issue of HPCwire, I'll provide more in-depth coverage of this project and discuss how Myricom sees it as a new model for building distributed computing networks.
HPC for the rest of us
For those of you who missed our special Newportwire coverage of the High Performance Computing and Communications (HPCC) Conference at the end of March, this week I've republished a feature article from that publication about the use of HPC at Procter & Gamble. I talked to Tom Lange, Director of Modeling and Simulation at P&G, and he revealed some of the high performance modeling work being done behind the scenes at the company. Tom's very outspoken about the commercial use of HPC and gives us his unique end-user perspective. Besides that, the P&G story is a great example of how HPC is insinuating itself into our everyday lives.
Seems like there's always at least one "gee whiz" article in the issue. This week there are two.
In the first announcement, a UCLA team claims to have made a breakthrough in semiconductor technology by developing what they call "spin-wave buses." The bus uses electron spin waves rather than electrical charge to transfer information, which is apparently a much more efficient way to push data around. The text that caught my attention was the following:
"UCLA Engineering's team contends that the creation and detection of spin-wave packets in nanostructures can be used efficiently to perform massively parallel computational operations, allowing for the design of the first practical, fully interconnected network of processors on a single chip."
Meanwhile on the East Coast, researchers at the University of Pennsylvania, Drexel University and Harvard University announced a novel way to create nanoscale memory. The group has proposed a method that combines nanowires and water to build ultra-dense memory devices. Here's the money quote:
"Though a scheme for the dense arrangement and addressing of these nanowires remains to be developed, such an approach would enable a storage density of more than 100,000 terabits per cubic centimeter. If this memory density can be realized commercially, a device the size of an iPod nano could hold enough MP3 music to play for 300,000 years without repeating a song or enough DVD quality video to play movies for 10,000 years without repetition."
Now that's entertainment!
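Out of curiosity, I ran a quick back-of-envelope check on that figure. The inputs below are my own assumptions -- a first-generation iPod nano's rough dimensions and a 256 kbps MP3 stream -- not numbers from the researchers, but the claim holds up:

```python
# Sanity check of the "300,000 years of MP3s" claim. Assumed inputs:
# an iPod nano of roughly 3.5 x 1.6 x 0.27 inches, and MP3 audio at
# 256 kbps -- my assumptions, not figures from the paper.
INCH_TO_CM = 2.54
volume_cm3 = (3.5 * INCH_TO_CM) * (1.6 * INCH_TO_CM) * (0.27 * INCH_TO_CM)

density_bits_per_cm3 = 100_000 * 1e12      # 100,000 terabits per cc
total_bits = density_bits_per_cm3 * volume_cm3

mp3_bits_per_second = 256_000
seconds = total_bits / mp3_bits_per_second
years = seconds / (365.25 * 24 * 3600)
print(f"~{years:,.0f} years of continuous playback")  # roughly 300,000
```

At a more typical 128 kbps the playback time roughly doubles, so if anything the researchers are being conservative.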
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - May 04, 2006 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.