September 07, 2007
Here's a collection of highlights, selected totally subjectively, from this week's HPC news stream.
>>10 words and a link
What about all those cores? HPC enters the mainstream;
BNL researchers use Sun's Network.com for quark-gluon plasma;
Independent study says AMD more power efficient than Intel;
AMD continues its death spiral of blame rather than creating customer value;
Hawking's COSMOS research consortium buys new SGI Altix;
IBD re-running retrospective piece on Seymour Cray's career;
>>Intel launches new quadcore Xeon 7300 series
Intel Corporation has unveiled the industry's first quad-core processors specifically designed for multi-processor (MP) servers running applications requiring uncompromised performance, reliability and scalability.
The press machine over at Intel is, well, enthusiastic. But this is news.
There are six new processors in the 7300 line, and today's announcement marks the completion of the company's transition to the new Core microarchitecture. The 7300 Xeons top out at 2.93 GHz (130 watts) at the high end, and run all the way down to an eco-friendly 1.86 GHz model (50 watts). In addition to having twice the cores, the 7300 also supports four times the memory of Intel's previous MP products.
>>HP, Verari add support for Intel quadcore
Verari announced this week that they've added support for the Intel Xeon multi-processor (MP) server platform featuring Quad-Core Intel Xeon Processors (the 7300 series).
From the company's release (http://www.verari.com/news/archive/PR090507.asp):
"Verari Systems' BladeRack 2 is a powerful and dense blade server platform built for virtualization and consolidation," said David B. Wright, CEO and Chairman of the Board of Verari Systems. "The addition of the Intel Xeon MP Platform to our family of highly scalable systems provides our customers with the type of enterprise reliability not available elsewhere and will further increase the IT efficiency and system utilization for VMware customers."
And HP expanded its quadcore offerings this week as well with the addition of the same Xeon 7300s to the ProLiant line (http://biz.yahoo.com/bw/070906/20070905006548.html?.v=1):
The rack-based HP ProLiant DL580 G5 server and the HP ProLiant BL680c G5, HP's first four processor (4P) quad-core server blade, offer increased performance with double the number of processor cores. In addition, the DL580 G5 has double the memory capacity of its predecessor.
Verari's offering will be available in Q4 according to the company; you can pick yourself up a shiny new ProLiant from HP today.
>>The Facebook of HPC
ClearSpeed Technology announced this week that they've licensed their accelerator technology (the same magic sauce that pumped up Japan's TSUBAME super and made ClearSpeed the darling of SC06) to BAE Systems for use in BAE's satellites.
What does this have to do with HPC or Facebook? Hang with me.
I got to talk to ClearSpeed CEO Tom Beese last week (thanks Christin!) and we had a very lively and wide-ranging discussion. It turns out that the technology ClearSpeed is licensing to BAE is the same stuff they sell for plugging into supercomputers. While BAE is understandably close-lipped about what happens inside the case of these billion-dollar devices, word is that they'll be used for in-flight data processing tasks, not satellite control functions.
ClearSpeed chose to license the technology rather than simply producing boards for BAE because the harsh operating conditions in space dictate production methods for electronic components that are not in line with ClearSpeed's primary business. You can read more at http://www.hpcwire.com/hpc/1760618.html.
What's really interesting about this story, though, is the peek it gives us into the future. Various pundits and people paid to have opinions are predicting anywhere from 80- to 128-core chips by the end of this decade. The price of a computation engine is falling dramatically, and we can predict that eventually cores will be "free" (that's free as in cell phone, not free as in beer).
At that point, which is likely not too far off, there will be no undoing the commoditization of HPC, and even IBM might finally give up the ghost and bury its Power line. This doesn't mean we'll have to give up specialized processing though, and the ClearSpeed deal points the way.
Although vendors have been talking about constellations of heterogeneous processors for some time, it's not been clear how the economics of the industry would allow a robust market to develop for these products at reasonable prices. Ganging components together in ways specific to HPC will always be too expensive relative to the dynamics of commodity product pricing. And niche companies developing coprocessors (or accelerators or whatever) for HPC would likely never be large enough to become reliably innovative over the long run.
But as cores become free, opportunities open up for processing in entirely new markets and products, and existing markets have the opportunity to enhance the function of processing in their products. With both major chipmakers adopting non-bus based architectures and opening their interconnects to third party hardware makers (as in AMD's Torrenza initiative) there is a clear opportunity for accelerator companies to turn themselves into platforms for specialized computing.
It works like this. ClearSpeed (and companies like them) develop an acceleration vehicle with specific customization points. The accelerator developed for HPC transforms into a real-time performance analytics engine in Ford's next generation vehicle by simply swapping out or reprogramming an ASIC. These devices can stay relatively simple because of the wealth of cores lying around that can be harnessed to support their functions, and they will remain relatively inexpensive because there is a demand in so many markets. And the companies making the devices will become large and diverse enough to survive mistakes in any one market, and they'll have a shot at innovating reliably over the long term.
It may be a while before software can take advantage of all the power we're creating, but a hardware solution may be just around the corner.
John West summarizes the headlines in HPC news each week for HPCwire. You can reach him at john.e.west (at) gmail.com.