December 21, 2007
For our last issue of the year, I'll take a look at some of the stories and developments that caught my attention in 2007 -- the big hits, the near-misses, and the strikeouts.
Intel Shifts Into High Gear, AMD Loses Traction
Intel fired on all cylinders this year, while AMD struggled to coalesce its new CPU and GPU business. Intel became the first chipmaker to produce commercial silicon at the 45nm process node. Intel's release of its Penryn processors in November capped a blitz of new offerings that caught AMD flat-footed. AMD's botched launch of its quad-core Opterons left frustrated OEMs and customers in its wake. Suggested holiday viewing for AMD CEO Hector Ruiz: "It's a Wonderful Life."
Sun Opens Up
In 2007, redshifting Sun got more serious about high-end computing with its Constellation supercluster and "Thumper" storage offerings. As a result of the more liberal regime under CEO Jonathan Schwartz, Sun Microsystems cozied up to Intel, Dell, IBM, and even (gasp!) Microsoft. Sun kept busy most of the year promoting open software (and hardware). In September, the company bought the Lustre parallel file system, and vowed to keep it open source too. What's next? Free servers?
Quantum Computing: Ready or Not
D-Wave Systems demoed 16-qubit and 28-qubit quantum computers this year. The company plans to have an online service for quantum computing ready by the end of 2008. D-Wave says that by the middle of 2009 its online service will offer pricing and risk analysis for the financial community, followed by a quantum simulation capability for chemical, materials and life science applications. Does this all make sense? If it does, it can't really be quantum physics.
A Tool Maker Disappears Into the Cloud
PeakStream, the once cutting-edge software tool provider that was going to make multicore, GPGPU and Cell processor application development available for the masses, got swallowed up by Google (which has spent the last couple of years buying acres of intellectual property). Fortunately, another company had been working the same paradigm, but a few months behind PeakStream. Enter RapidMind, stage right.
Big Iron Gets Bigger
The usual suspects announced their latest and greatest supercomputers. IBM unleashed Blue Gene/P, the petaflop-era kicker to Blue Gene/L. Cray followed its "Adaptive Computing" instincts with the XT5 and the XT5h hybrid. NEC stayed on the vector path with the SX-9. Sun installed its half petaflop Constellation supercluster at TACC. Applications wanted.
GPGPU Gets Ready for Prime Time
NVIDIA mounted an aggressive campaign to take the early lead in the nascent GPGPU market. In February, the company launched its CUDA development tools for GPU programming and then in June brought out its spring line of Tesla GPU computing hardware. AMD made some noise with its R600 GPU and is now planning for double precision GPU computing next year with its FireStream Stream Processor and associated SDK. Intel is looking to do a GPGPU end-around with its upcoming "Larrabee" manycore products. Stay tuned; it's going to be a knock-down, drag-out fight.
FPGAs, Ready and Willing
Lots of small players (Celoxica, DRC Computer Corp, SRC Computers, Nallatech, Mitrionics) looked to take reconfigurable high performance computing to the next level. Results were mixed, but Intel, AMD, HP, SGI and Cray gave them a boost by making it easier to plug these devices into servers. So far, there are no FPGA systems in the TOP500. The next generation of FPGA chips with more silicon real estate may help. Software standards, anyone?
ClearSpeed Eases On the Accelerator
The underdog in the HPC accelerator competition, ClearSpeed has managed to hook up with some of the larger HPC OEMs to garner more visibility. Resellers include HP, IBM and Sun Microsystems, as well as Tao Computing, a Korean supplier of HPC systems. ClearSpeed's main claim to fame is the deployment of its accelerator boards in the TSUBAME super at Tokyo Tech. With all the talk of GPU computing, FPGAs and the Cell processor, as of today TSUBAME represents the only accelerated super on the TOP500 list. Go figure.
Optical Cables See the Light
In anticipation of more widespread deployment of InfiniBand and 10 Gigabit Ethernet, a bunch of vendors announced optical cable assemblies for cluster interconnects. Intel, Luxtera, Zarlink, Finisar and XLoom all came out with products aimed at slightly different markets and price points. All will be shipping in 2008. The good news for optical cable vendors: the days of copper cables are numbered; the bad news: no one knows what that number is.
InfiniBand Goes Public
Riding the momentum of expanded InfiniBand penetration in HPC, IB vendors Mellanox and Voltaire executed initial public offerings in 2007. IDC's rosy outlook for InfiniBand predicts factory revenue for switches and adapters to grow from $62.3 million in 2006 to $224.7 million in 2011. Furthermore, the analyst firm believes InfiniBand will grow from its HPC base into the larger enterprise market. Vendors selling 10GbE wares are targeting the same applications and are planning to rain on InfiniBand's parade. (For more on this topic, read our feature piece on the InfiniBand-10GbE showdown in this issue.)
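For a rough sense of what IDC's forecast implies, here's a quick back-of-envelope calculation. The dollar figures come straight from the forecast above; the resulting annual growth rate is our arithmetic, not IDC's:

```python
# Implied compound annual growth rate (CAGR) for IDC's InfiniBand
# forecast: $62.3M (2006) -> $224.7M (2011), i.e., five years of growth.
def cagr(start, end, years):
    return (end / start) ** (1.0 / years) - 1

ib_cagr = cagr(62.3, 224.7, 5)
print(f"Implied InfiniBand CAGR: {ib_cagr:.1%}")  # roughly 29% per year
```

That's a heady growth rate for any interconnect, which helps explain both the IPOs and the 10GbE camp's interest in spoiling the party.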
TOP500's Extreme Makeover
The latest edition of the TOP500 list, which came out in November, reflected a global fascination with all things HPC. Three of the top five machines come from outside the United States: Germany, Sweden and India. The number one system is still the IBM Blue Gene/L at Lawrence Livermore, which was recently upgraded from 280.6 to 478.2 teraflops. The number two system is a 167 teraflop Blue Gene/P installed at Forschungszentrum Juelich. SGI came in at number three with a 126 teraflop Altix ICE cluster that is headed to the New Mexico Computing Applications Center. HP got back in the high-end game with two 100-plus teraflop systems, which claimed the number four and five spots. Even the machines at the bottom of the list were no slackers. Number 500 was a 5.9 teraflop system from the UK.
HPC Growth Spurt Continues
If you can believe the latest numbers from IDC, the HPC business is bulking up like an athlete on steroids. Double-digit growth has become the norm, especially at the low end of the market, where clusters costing less than $50K now give lots of small groups access to mainstream supercomputing. IDC estimated overall HPC server revenue at over $10 billion in 2006 and projected $15 billion in server sales by 2011. That doesn't even count the associated revenue in storage, networking, software and services. More to the point, HPC is outpacing the overall server market, which, without the HPC revenue factored in, is achieving only lackluster growth. All signs point to these trends continuing.
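Running the same back-of-envelope arithmetic on IDC's server revenue projection (the $10 billion and $15 billion figures are from the forecast above; the growth rate is our calculation):

```python
# IDC's HPC server forecast: $10B (2006) growing to $15B (2011),
# a span of five years, implies this compound annual growth rate.
hpc_cagr = (15.0 / 10.0) ** (1.0 / 5) - 1
print(f"Implied HPC server CAGR: {hpc_cagr:.1%}")  # about 8.4% per year
```

That's well short of double digits as a compound rate, so the "double-digit growth" presumably refers to the hotter sub-segments, like those sub-$50K clusters, rather than the market as a whole.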
The View from Berkeley, One Year Later
Published a year ago this week, "The Landscape of Parallel Computing Research: The View from Berkeley" became a wake-up call to the computing community about the perils and pitfalls of our manycore destiny. Was anyone listening? Maybe. In the past year, both Intel and Microsoft spent a gazillion dollars on parallel computing R&D and education. Universities like Purdue, LSU, the University of Manchester, MIT and many others are expanding their HPC curriculums for the next crop of students. By the time these kids start to graduate in 2010, the manycore chips will be spilling out of the fabs.
That's All Folks
That should do it for 2007. It's been a privilege for all of us here at HPCwire to help keep you informed about this great and growing HPC community. I'd like to thank you, our readers, as well as our contributors and sponsors, for all your support. We'll be taking a one-week holiday break to recharge our batteries before jumping into the new year with both feet. Our next issue will be published on January 4th. Until then...
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - December 20, 2007 @ 9:00 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.