September 08, 2006
Vacation's over. The HPC news started slowly after the Labor Day weekend but picked up quickly. IBM, Intel, Linux Networx and the DOE Office of Science all made their presence felt this week. DARPA HPCS was a no-show -- again. But my sense is that it's going to be an eventful autumn for supercomputing.
Intel Gets a Trim
Not everyone will be going back to work after Labor Day. Intel announced it will be cutting its workforce by about 10 percent over the next year, shedding approximately 10,500 workers. The move is intended to save billions of dollars and make the company more competitive with arch-rival AMD. After getting the message that performance-per-watt is a better strategy than GHz-at-all-costs, Intel may also have decided to emulate AMD in another way -- size.
A Makeover for Linux Networx
Intel employees weren't the only ones losing their jobs. Linux Networx (LNXI) has apparently laid off several dozen of its workers as the company shifts its focus to a more software-centric strategy. HPCwire learned of the layoffs a couple of weeks ago and contacted the company for comment. LNXI officials wouldn't talk actual numbers, but they did confirm that they would replace an unspecified number of "hardware engineers and manufacturing people" with an equally unspecified number of software engineers, salespeople and services employees. This is part of the company's ongoing strategy to deliver a more refined supercomputing experience, one more focused on delivering turnkey systems.
HPCwire also learned that the company was about to receive an injection of new investment capital. On this topic, Linux Networx was willing to specify numbers -- $37 million to be exact. The money came from its original backers, Oak Investment Partners and Tudor Ventures, as well as a couple of new investors. For the whole story, read our exclusive interview with CEO Robert "Bo" Ewald in this issue.
SciDAC, the Sequel
On Thursday, the DOE's Office of Science announced the second round of funding for its Scientific Discovery through Advanced Computing (SciDAC) program. Approximately $60 million will be invested annually in 30 computational science projects over the next three to five years. The SciDAC-2 projects will focus on the application of petascale computing in a variety of R&D areas such as materials science, energy development, particle physics and climate/environment studies.
According to the DOE, over 350 letters of intent were received for SciDAC consideration, resulting in 240 full proposals. After a month of internal review, the proposals were scrutinized by peer review panels. From there, 30 of the most promising projects, involving 70 institutions, were selected.
To learn more about some of the SciDAC-2 efforts, read our special DOE SciDAC News section in this week's issue. HPCwire will continue to cover specific projects as work commences at the various institutions.
Waiting for DARPA
Maybe the most important article in this week's issue is the one that's missing: "DARPA Announces HPCS Phase III Winners." In fact, it's been missing for over two months. The much-anticipated selection of the vendor or vendors for the implementation phase of the High Productivity Computing Systems program was supposed to be announced in late June or early July. So people are starting to wonder what's going on. I've received a number of queries from readers wanting to know when DARPA will be gracing us with their decision.
Obviously something unanticipated has occurred. Bill Harrod, HPCS program director, hasn't been talking in public. He was originally scheduled to speak at this week's High Performance Computing Users Conference in Washington on September 7 and at the HPC User Forum in Denver on September 19, but apparently had to cancel both appearances. The few rumors I've heard are unprintable. And the scenarios I've composed in my own head are only fit for an episode of "X Files."
The latest tidbit of information I've come across is that DARPA won't announce until they get a sense of how the budgets turn out in the 2007 Defense Appropriations Bill, which Congress took up again after it reconvened on September 5. So the best estimates are that the Phase III decision won't be announced before late September or possibly even mid-October. But your guess is as good as mine -- unless you're Bill Harrod.
LANL Goes For the Petaflops
This week IBM announced it was selected to build the "Roadrunner" supercomputer for Los Alamos National Laboratory (LANL). Scheduled for deployment sometime in 2008, the system is expected to deliver sustained performance in the neighborhood of 1.6 petaflops. Such a system promises to boost LANL's standing as a cutting-edge lab for high performance computing.
According to IBM: "The machine is to be built entirely from commercially available hardware and based on the Linux operating system. IBM System x 3755 servers based on AMD Opteron technology will be deployed in conjunction with IBM BladeCenter H systems with Cell B.E. technology."
This is the second announcement of a petaflops-class supercomputer within the past few months. In June, Cray announced that it would be deploying a petaflops supercomputer for Oak Ridge National Laboratory (ORNL), also in the 2008 time frame. The irony here is that IBM will apparently be the first to deploy a modern heterogeneous supercomputer, while Cray, the most avid proponent of HPC heterogeneity, will be delivering a homogeneous Opteron-only system to ORNL.
Again, according to IBM: "Roadrunner's construction will involve the creation of advanced "Hybrid Programming" software which will orchestrate the Cell B.E.-based system and AMD system and will inaugurate a new era of heterogeneous technology designs in supercomputing. These innovations, created collaboratively among IBM and LANL engineers will allow IBM to deploy mixed-technology systems to companies of all sizes, spanning industries such as life sciences, financial services, automotive and aerospace design."
Is IBM rebranding Cray's "Adaptive Supercomputing" vision as "Hybrid Programming"? I'm sure we'll be hearing more about this in the coming weeks. Stay tuned.
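For readers wondering what a hybrid design means in practice, here is a minimal sketch of the general host/accelerator offload pattern: a general-purpose processor (such as an Opteron) handles setup and control flow, while a compute-dense kernel is routed to an accelerator (such as the Cell B.E.) when one is available. This is a generic illustration only -- the saxpy kernel, the accel_available flag and the simulated offload are my own assumptions for the sketch, not details of IBM's forthcoming Hybrid Programming software.

```c
/* Generic sketch of the hybrid offload idea behind heterogeneous systems:
 * the host CPU owns control flow and data setup, and a compute-heavy kernel
 * is dispatched to an accelerator when one is present. This is an
 * illustration, not IBM's actual "Hybrid Programming" software; the
 * accelerator call below is simulated on the host. */
#include <stdio.h>
#include <stdlib.h>

#define N 1000000

/* Kernel as it would run on the host CPU (e.g., an Opteron core). */
static void saxpy_host(float a, const float *x, float *y, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Stand-in for the same kernel offloaded to an accelerator; in a real
 * hybrid code this would hand the data to a vendor runtime and launch a
 * program compiled for the accelerator cores. */
static void saxpy_accel(float a, const float *x, float *y, size_t n)
{
    saxpy_host(a, x, y, n);   /* simulated offload: same math, different target */
}

int main(void)
{
    float *x = malloc(N * sizeof *x);
    float *y = malloc(N * sizeof *y);
    if (!x || !y)
        return 1;

    for (size_t i = 0; i < N; i++) {
        x[i] = 1.0f;
        y[i] = 2.0f;
    }

    /* The "orchestration" decision: route the compute-heavy part to the
     * accelerator when available, fall back to the host otherwise. */
    int accel_available = 1;   /* assumed for illustration */
    if (accel_available)
        saxpy_accel(2.0f, x, y, N);
    else
        saxpy_host(2.0f, x, y, N);

    printf("y[0] = %f\n", y[0]);   /* expect 4.0 */
    free(x);
    free(y);
    return 0;
}
```

The interesting part of any such orchestration layer is not the kernel itself but the decision of which pieces of the workload are worth shipping across the host/accelerator boundary, given the cost of moving the data.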
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.
Posted by Michael Feldman - September 07, 2006 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.