December 15, 2006
This is the last HPCwire issue of 2006. I thought I'd take some time and quickly recap some of the bigger events and trends of the year. There certainly was no shortage of stories.
The Empire Strikes Back
Intel rebounds against AMD with its brand new Core microarchitecture. The Xeons finally achieve performance parity (and then some) with the current crop of Opterons. Intel also brings us 65nm chips and the first (sort of) quad-core processor -- two dual-core dies in a single package. AMD swallows its pride and ATI in one gulp and proposes to change the game with its newfound GPU division.
Cray Comes Back
The supercomputer maker has one of its best years in recent memory, gathering several high-profile supercomputing wins and capping the year with the DARPA HPCS selection. "Adaptive Supercomputing" opens to positive reviews.
The Fall and Rise of Silicon Graphics
SGI crashes, burns and resurrects itself in the space of a few months. Even the business cycles are accelerated in HPC. Best of luck in 2007.
InfiniBand Picks Up Speed
InfiniBand gets some new respect from the HPC cluster crowd this year, jumping to 20 Gbps to remind everyone why 10 GbE is not the ultimate answer to HPC interconnects. IB is still a question mark in the enterprise though. We'll see what the OEMs do.
Itanium's Big Push
With the help of $10 billion from the Itanium Solutions Alliance and the introduction of the dual-core Montecito processor, the Itanium is poised for stardom. At least that's what Intel, HP and IDC are telling us. 2007 may be a make-or-break year.
DARPA the Decider
After a six-month delay, DARPA makes its HPCS Phase III selections, rewarding IBM and Cray with about a quarter of a billion dollars apiece to reinvent supercomputing. To IBM, that's just lunch money, but for Cray, that's serious coin. One question: "What are you guys really building?"
Sun takes its "Proximity Communication" technology and goes home.
HPC Gets Bored with CPUs
Techies decide CPUs can't be the answer to everything. Accelerators come into vogue.
Cell: Mercury Computer Systems and IBM unveil Cell-based HPC gear in 2006. Cell will go into the Roadrunner petaflop super at Los Alamos. Popularity cuts both ways -- it's easier to buy a Cell blade than a PlayStation 3.
GPUs: GPUs may be the next general-purpose processing engine. AMD and NVIDIA develop products for the budding GPGPU market. Can Intel be far behind? (For a taste of what GPGPU programming looks like, see the sketch after this list.)
FPGAs: HyperTransport enables some interesting solutions, like FPGA modules that plug directly into Opteron sockets. Still working on the programming model though -- hope springs eternal.
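To give a flavor of what this new accelerator programming actually looks like, here is a minimal sketch of GPU offload written in NVIDIA's CUDA model, announced this November. Everything in it -- the kernel name, the sizes, the data -- is illustrative rather than taken from any shipping application:

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    /* Each GPU thread adds one pair of elements. */
    __global__ void vadd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main(void)
    {
        const int n = 1 << 20;                /* one million elements */
        size_t bytes = n * sizeof(float);

        float *a = (float *)malloc(bytes);
        float *b = (float *)malloc(bytes);
        float *c = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

        /* Copy inputs to GPU memory, run the kernel, copy the result back. */
        float *da, *db, *dc;
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, b, bytes, cudaMemcpyHostToDevice);

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vadd<<<blocks, threads>>>(da, db, dc, n);

        cudaMemcpy(c, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", c[0]);          /* expect 3.0 */

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(a); free(b); free(c);
        return 0;
    }

The arithmetic is trivial; the point is the model. The programmer explicitly shuttles data between two memories and launches thousands of lightweight threads, and that shift in thinking is exactly what the FPGA crowd is still wrestling with on their side of the fence.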
Petaflops on Order
ORNL and LANL order the first two petaflop-class machines for 2008. The only real surprise: neither one is Blue Gene. In Japan, RIKEN announces MDGRAPE-3, a special-purpose petaflop machine for molecular dynamics.
The 800-Core Gorilla
The HPC community obsesses over multi-core. The consensus: "We're not ready." That's OK. Neither is the rest of the industry.
Till Next Year
I'd like to thank our readers, contributors, sponsors and the entire staff at Tabor Communications for all their support this year. I'd also like to show my appreciation to all the media relations folks who fill up my Inbox every day with wondrous tales about the most inspired group of people in the IT industry -- the HPC community. As for me, I will be taking a much-anticipated two-week holiday break. Our next issue will be published on January 5th, 2007.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.
Posted by Michael Feldman - December 14, 2006 @ 9:00 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.