September 08, 2006
Vacation's over. The HPC news started slowly after the Labor Day weekend but picked up quickly. IBM, Intel, Linux Networx and the DOE Office of Science all made their presence felt this week. DARPA HPCS was a no-show -- again. But my sense is that it's going to be an eventful autumn for supercomputing.
Intel Gets a Trim
Not everyone will be going back to work after Labor Day. Intel announced it will cut its workforce by about 10 percent over the next year, shedding approximately 10,500 workers. The move is intended to save billions of dollars and make the company more competitive with arch-rival AMD. After getting the message that performance-per-watt is a better strategy than GHz-at-all-costs, Intel may have also decided it wants to emulate AMD in another way -- size.
A Makeover for Linux Networx
Intel employees weren't the only ones losing their jobs. Linux Networx (LNXI) has apparently laid off several dozen of its workers as the company shifts its focus to a more software-centric strategy. HPCwire learned of the layoffs a couple of weeks ago and contacted the company for comment. LNXI officials wouldn't talk actual numbers, but they did confirm that they would replace an unspecified number of "hardware engineers and manufacturing people" with an equally unspecified number of software engineers, salespeople and services employees. This is part of the company's ongoing strategy to deliver a more refined supercomputing experience, one focused on turnkey systems.
HPCwire also learned that the company was about to receive an injection of new investment capital. On this topic, Linux Networx was willing to specify numbers -- $37 million to be exact. The money came from its original backers, Oak Investment Partners and Tudor Ventures, as well as a couple of new investors. For the whole story, read our exclusive interview with CEO Robert "Bo" Ewald in this issue.
SciDAC, the Sequel
On Thursday, the DOE's Office of Science announced the second round of funding for its Scientific Discovery through Advanced Computing (SciDAC) program. Approximately $60 million will be invested annually in 30 computational science projects over the next three to five years. The SciDAC-2 projects will focus on the application of petascale computing in a variety of R&D areas such as materials science, energy development, particle physics and climate/environment studies.
According to the DOE, over 350 letters of intent were received for SciDAC consideration, resulting in 240 full proposals. After a month of internal review, the proposals were scrutinized by peer review panels. From there, 30 of the most promising projects, involving 70 institutions, were selected.
To learn more about some of the SciDAC-2 efforts, read our special DOE SciDAC News section in this week's issue. HPCwire will continue to cover specific projects as work commences at the various institutions.
Waiting on DARPA
Maybe the most important article in this week's issue is the one that's missing: "DARPA Announces HPCS Phase III Winners." In fact, it's been missing for over two months. The much-anticipated selection of the vendor or vendors for the implementation phase of the High Productivity Computing Systems program was supposed to be announced in late June or early July. So people are starting to wonder what's going on. I've received a number of queries from readers wanting to know when DARPA will be gracing us with its decision.
Obviously something unanticipated has occurred. Bill Harrod, HPCS program director, hasn't been talking in public. He was originally scheduled to speak at this week's High Performance Computing Users Conference in Washington on September 7 and at the HPC User Forum in Denver on September 19, but apparently had to cancel both appearances. The few rumors I've heard are unprintable. And the scenarios I've composed in my own head are only fit for an episode of "X Files."
The latest tidbit of information I've come across is that DARPA won't announce until they get a sense of how the budgets turn out in the 2007 Defense Appropriations Bill, which Congress took up again after it reconvened on September 5. So the best estimates are that the Phase III decision won't be announced before late September or possibly even mid-October. But your guess is as good as mine -- unless you're Bill Harrod.
LANL Goes For the Petaflops
This week IBM announced it was selected to build the "Roadrunner" supercomputer for Los Alamos National Laboratory (LANL). Scheduled to be deployed sometime in 2008, the sustained performance for this system is expected to be in the neighborhood of 1.6 petaflops. Such a system promises to boost LANL's standing as a cutting-edge lab for high performance computing.
According to IBM: "The machine is to be built entirely from commercially available hardware and based on the Linux operating system. IBM System x 3755 servers based on AMD Opteron technology will be deployed in conjunction with IBM BladeCenter H systems with Cell B.E. technology."
This is the second announcement of a petaflops-class supercomputer within the past few months. In June, Cray announced that it would be deploying a petaflops supercomputer for Oak Ridge National Laboratory (ORNL), also in the 2008 time frame. The irony here is that IBM will apparently be the first to deploy a modern heterogeneous supercomputer, while Cray, the most avid proponent of HPC heterogeneity, will be delivering a homogeneous Opteron-only system to ORNL.
Again, according to IBM: "Roadrunner's construction will involve the creation of advanced "Hybrid Programming" software which will orchestrate the Cell B.E.-based system and AMD system and will inaugurate a new era of heterogeneous technology designs in supercomputing. These innovations, created collaboratively among IBM and LANL engineers will allow IBM to deploy mixed-technology systems to companies of all sizes, spanning industries such as life sciences, financial services, automotive and aerospace design."
Is IBM rebranding Cray's "Adaptive Supercomputing" vision as "Hybrid Programming"? I'm sure we'll be hearing more about this in the coming weeks. Stay tuned.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at email@example.com.
Posted by Michael Feldman - September 07, 2006 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.