December 07, 2007
Here is this week's collection of highlights, selected totally subjectively, from the HPC news stream as reported at insideHPC.com and HPCwire.
>>10 words and a link
IBM working on optical transfer of data between cores;
AMD drops off preliminary list of top 10 chip makers;
Cray receives first XT5 order;
BLS: jobs in computer fields and math growing fastest;
IDC: HPC server market grew to $3.0 billion in Q3;
DailyTech's leaked memos on where Intel and AMD are going;
XtreemOS coming to a grid near you;
Star-P gets R, Vista support, and more;
>>AMD delays volume Barcelona shipments
AMD has announced that it has pushed back its plan to ramp up production (and subsequent availability) of the quad-core Barcelona chips from Q4 of 2007 to Q1 of 2008. In October, the company stated that Barcelona silicon would be "widely available" by the end of the year.
There has been widespread speculation that the delay in the rollout is due to an erratum in the translation lookaside buffer (TLB). AMD made the issue public some time ago, and it can be worked around with a BIOS update, but at a reported cost of 10 to 20 percent in performance.
Impact on HPC? Unclear. From DailyTech, we do know that Cray was allowed its allotment (http://www.dailytech.com//Understanding++AMDs+TLB+Processor+Bug/article9915.htm):
AMD partners tell DailyTech that all bulk Barcelona shipments have been halted pending application screening based on the customer. Cray, for example, was allowed its latest allocation for machines that will not use these nested virtualization techniques. Other AMD corporate customers were told to use Revision F3 (K8) processors in the meantime.
The language is a little muddled, since Cray is using Budapest, not Barcelona, but it seems likely that the TLB issue lies in the core itself, not in the platform externals that differ between Barcelona and Budapest. It does appear that if you intend to run virtualization software on these chips, you aren't going to be getting Barcelona for a while.
The Register is also covering the fun (http://www.theregister.co.uk/2007/12/06/opteron_delays_amd/), and I'm of a similar mind regarding AMD's evasion of responsibility with the language here. The company is denying this is a "stop ship":
"We haven't changed the shipping pattern," AMD man Phil Hughes told InternetNews. "It's only a stop ship if it's shipping in volume, and we're only shipping Barcelona for specific customer commitments, like larger volume deployments." AMD seems to be fiddling with language, as far as we're concerned.
As far as I'm concerned, too. Dear AMD: take it like a man, fix the problem, and try to stop shooting your own feet for crying out loud.
>>Red Hat announces distributed computing capabilities
This week Linux vendor Red Hat announced a new package that adds distributed computing features based on Condor to its enterprise toolbox. From the release (http://www.redhat.com/about/news/prarchive/2007/mrg.html):
Red Hat today announced Red Hat Enterprise MRG (Messaging, Realtime, Grid), offering new capabilities for deployment on Red Hat Enterprise Linux and third-party operating platforms that further strengthen Red Hat's position as the strategic supplier for critical enterprise applications in highly demanding environments, such as Financial Services and Government agencies. Red Hat Enterprise MRG is a revolutionary distributed computing platform that provides exceptional performance through reliable enterprise messaging, realtime capabilities and advanced grid and high-throughput computing technologies.
With specific regard to the distributed computing features:
Distributed Computing: Red Hat Enterprise MRG enables customers to leverage the full power of distributed computing with commercial-strength grid capabilities, based on the University of Wisconsin's highly respected Condor high-throughput computing project. These capabilities provide customers with a practical means of using their total compute capacity with maximum efficiency and flexibility, while improving the speed and availability of any application. Additionally, Red Hat and the University of Wisconsin have signed a strategic agreement to make Condor's source code available under an OSI-approved license and jointly fund ongoing co-development at the University of Wisconsin.
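For readers who haven't run into Condor's high-throughput model before: jobs are described in a plain-text submit file and matched by the scheduler against machines advertising free capacity. As a rough illustration (the script and file names here are hypothetical, not from the Red Hat release), a minimal submit description looks like this:

```
# Minimal Condor submit description (hypothetical job and file names)
universe   = vanilla          # plain serial job, no special runtime support
executable = simulate.sh      # program to run on whichever machine is matched
arguments  = input.dat
output     = simulate.out     # stdout of the job
error      = simulate.err     # stderr of the job
log        = simulate.log     # Condor's own event log for this job
queue 1                       # submit one instance of the job
```

Handing this to `condor_submit` queues the job; Condor's matchmaker then pairs it with an idle machine and ships the results back, which is the "total compute capacity with maximum efficiency" pitch in practice.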
>>New beginnings
Seems like the wrong time of year, but there were many beginnings celebrated this week. Here are a few that caught my eye.
First up, the Army High Performance Computing Research Center (AHPCRC) cut a ribbon or two at Stanford University as they looked ahead to five years of contract bliss, and a new 1,600 core Dell cluster. http://insidehpc.com/2007/12/06/ahpcrc-ribbons-cut/
Next we have See3D, a new state-of-the-art visualization center (or centre) in Wales. Mechdyne announced this week they have installed some heavy duty immersive vis gear at See3D, including a 15-ft custom dome display and a PowerWall. http://insidehpc.com/2007/12/06/new-immersive-gear-in-wales/
Not to be outdone, the Holland Computer Center is having opening ceremonies at the end of the week to celebrate the installation of its new 1,150 node AMD quadcore cluster in the Peter Kiewit Institute. http://insidehpc.com/2007/12/06/holland-computing-centers-newest-super/
Then there was Argonne. Governor Rod R. Blagojevich announced this week that the State of Illinois has floated $70M in bonds to build a new 200,000 square foot facility. The new Theory and Computing Sciences Building will be located on Argonne's campus in DuPage County, and will hold over 600 lab employees, an 18,000 square foot research library, labs, and a conference center. http://insidehpc.com/2007/12/06/argonne-puts-on-200000-square-feet/
>>Community resource: scholarships, fellowships and postdocs at insideHPC
I've created a special page to serve as the permanent home for all the information we can scrape together on scholarships, fellowships, and postdocs for students and recent graduates with an emphasis in HPC. You can find it at http://insidehpc.com/scholarships/. If you have more details on the ones I've posted, or if you know about one that I've missed, please drop me a line at email@example.com.