A show the size of the Supercomputing Conference is difficult to swallow whole. With hundreds of exhibitors and conference activities, it's virtually impossible to get a balanced perspective. That said, here are a few areas that caught my attention at SC08.
Last week's announcement of the upgraded "Jaguar" system at Oak Ridge National Laboratory had a lot of people, including yours truly, thinking that the Cray super would take the TOP500 crown this time around. It was not to be.
Amid the gloomiest economy in decades, this year's Supercomputing conference -- SC08 -- got underway in Austin, Texas. Despite the worldwide financial turmoil, the 2008 conference may turn out to be the largest SC event of them all, with over 330 exhibitors and more than 10,000 registered attendees.
The announcement of each new TOP500 list, especially one with systems that break a three-order-of-magnitude barrier in FLOPS, tends to get me thinking about the meaning of the term "supercomputer."
In order to get to know our HPC Horizons members a little better, we have started this new column. For the first edition of HPC Horizons Community Member SPOTLIGHT, we introduce Laurence Liew, the Open Source Grid Development Centre Director at Platform Computing in Singapore.
The 20th annual Supercomputing (SC) conference launches next week in Austin, Texas. As usual, HPCwire will be providing live coverage, but this year we decided to include some pre-conference guidance for the event.
Barcelona, we hardly knew ye. Today AMD launched its 45nm "Shanghai" quad-core Opterons, sending the ill-fated 65nm Barcelona chips into the microprocessor history books.
Traditional HPC and Edge HPC -- The Same Only Different
Post Date: November 11, 2008 @ 9:00 PM, Pacific Standard Time
Blog: HPC Matters
Tabor Research is in the midst of conducting in-depth end-user interviews with organizations running or considering Edge HPC applications. As we have completed the initial interviews, several similarities and differences between the two branches of high-productivity computing have become apparent.
The petascale era is in full swing. Yesterday, the DOE announced that the Cray XT 'Jaguar' supercomputer at Oak Ridge has been upgraded to 1.64 peak petaflops.
It seems hardly a week passes without some news of HPC being delivered as an on-demand service. That topic includes everything from in-house grids to commercial clouds, but it's the cloud element that's grabbing the attention of the supercomputing crowd.
The Xeon Phi coprocessor might be the new kid on the high performance block, but of all the first-rate kickers of the Intel tires, the Texas Advanced Computing Center (TACC) got in the first real jab with its new top-ten Stampede system. We talk with the center's Karl Schultz about the challenges of programming for Phi -- but more specifically, the optimization...
Although Horst Simon was named Deputy Director of Lawrence Berkeley National Laboratory, he maintains his strong ties to the scientific computing community as an editor of the TOP500 list and as an invited speaker at conferences.
Supercomputing veteran Bo Ewald has been neck-deep in bleeding-edge system development since his twelve-year stint at Cray Research, which began in the mid-1980s and was followed by tenures at large organizations like SGI and at startups including Scale Eight Corporation and Linux Networx. He has now put his weight behind quantum company....
May 16, 2013
When it comes to the cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
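As a rough illustration of why latency matters so much for tightly coupled codes like CFD, here is a minimal back-of-the-envelope sketch in Python. The timings and message counts are hypothetical assumptions for illustration, not figures from the Bonn study:

```python
# Minimal latency-impact sketch (hypothetical numbers, not from the study).
# A halo-exchange CFD timestep spends time computing plus exchanging boundary
# data; for small messages, per-message latency dominates the exchange cost.

def step_time(compute_s, n_msgs, latency_s, msg_bytes, bandwidth_bps):
    """Seconds per timestep for one rank: compute + communication."""
    comm_s = n_msgs * (latency_s + msg_bytes * 8 / bandwidth_bps)
    return compute_s + comm_s

# ~2 us latency (InfiniBand-class fabric) vs. ~0.5 ms (commodity cloud network)
fabric = step_time(0.010, 26, 2e-6, 64_000, 40e9)
cloud = step_time(0.010, 26, 500e-6, 64_000, 10e9)
print(f"cloud step is {cloud / fabric:.1f}x slower per iteration")
```

Even with identical compute time per step, the half-millisecond round trips leave the cloud iteration running at less than half the pace of the fabric-connected one.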
May 15, 2013
Supercomputers at the Department of Energy's National Energy Research Scientific Computing Center (NERSC) have tackled important computational problems such as the collapse of the atomic state, the optimization of chemical catalysts and, now, the modeling of popping bubbles.
May 10, 2013
The program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
May 09, 2013
The Japanese government has revealed its plans to best its previous K Computer efforts with what it hopes will be the first exascale system...
May 08, 2013
For engineers looking to leverage high-performance computing, the accessibility of a cloud-based approach is a powerful draw, but there are costs that may not be readily apparent.
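To make those less-apparent costs concrete, here is a minimal cost sketch in Python. The prices and quantities are hypothetical placeholders, not figures from the article; the point is that data egress and long-term storage can rival the headline instance rate:

```python
# Hypothetical cloud-HPC cost sketch (illustrative prices, not real quotes).
# The hourly instance price is only part of the bill; moving results out of
# the cloud and keeping data resident often add substantial line items.

def monthly_cost(node_hours, price_per_hour,
                 egress_gb, egress_per_gb,
                 stored_gb, storage_per_gb):
    """Return (compute, egress, storage) dollars for one month."""
    compute = node_hours * price_per_hour
    egress = egress_gb * egress_per_gb
    storage = stored_gb * storage_per_gb
    return compute, egress, storage

compute, egress, storage = monthly_cost(2_000, 1.50, 5_000, 0.09, 20_000, 0.10)
print(f"compute ${compute:,.0f}  egress ${egress:,.0f}  storage ${storage:,.0f}")
```

Under these assumed numbers, egress and storage together add more than 80 percent on top of the raw compute charge -- exactly the kind of surprise the piece warns about.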
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/15/2013 | Bull | “50% of HPC users say their largest jobs scale to 120 cores or less.” How about yours? Are your codes ready to take advantage of today’s and tomorrow’s ultra-parallel HPC systems? Download this white paper by analyst firm Intersect360 Research to see what Bull and Intel’s Center for Excellence in Parallel Programming can do for your codes (and see the scaling sketch below).
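The scaling ceiling behind that statistic follows from Amdahl's law: even a small serial fraction caps the achievable speedup no matter how many cores you add. A minimal illustration in Python (the 1% serial fraction is an assumed value, not from the white paper):

```python
# Amdahl's law sketch: speedup = 1 / (s + (1 - s) / N) for serial fraction s
# on N cores. A 1% serial fraction (assumed here) already caps speedup at 100x.

def speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (120, 1_000, 10_000):
    print(f"{cores:>6} cores -> {speedup(0.01, cores):5.1f}x speedup")
```

At a 1% serial fraction, 120 cores deliver only about 55x -- under half the ideal -- and ten thousand cores buy barely more than one thousand do, which is why modernizing codes for parallelism matters more than simply adding hardware.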
In this demonstration of the SGI DMF ZeroWatt disk solution, Dr. Eng Lim Goh, SGI CTO, discusses how SGI DMF software can reduce costs and power consumption in an exascale (Big Data) storage datacenter.
The Cray CS300-AC cluster supercomputer offers an energy-efficient, air-cooled design based on modular, industry-standard platforms, featuring the latest processor and network technologies and supporting a wide range of datacenter cooling requirements.