Is stream computing the next big thing or is it just an excuse to sell more GPUs? Editor Michael Feldman takes a look at ATI's vision to bring the GPU into high performance computing and how this fits into AMD's plans. Feldman also spotlights some of the feature articles in this week's issue, including a story about a Japanese project to develop a 10-petaflop supercomputer, a pair of opinion pieces about the merits of RDMA technology, and two interviews with die-hard Itanium fans.
In San Francisco this week, Intel execs evangelized the company's vision of the future of computing. CEO Paul Otellini used the Intel Developer Forum as a platform to present the overall product roadmap for the next four years and beyond. CTO Justin Rattner talked about their long-range terascale processor development and the new types of applications that will be using this advanced technology. Editor Michael Feldman takes a look at some of Intel's plans, including their vision for terascale computing.
SGI Prepares to Reboot; Intel Beams About Its Laser
Post Date: September 21, 2006 @ 9:00 PM, Pacific Daylight Time
Blog: From the Editor
There was a lot of interesting news for the HPC crowd this week. SGI arranged for its return from bankruptcy; Intel made a splash with a breakthrough in silicon photonics; and two vendors introduced a couple of unique products. Editor Michael Feldman recaps the week's HPC happenings.
This week's issue marks the beginning of a new HPCwire column: High Performance Careers. The column will focus on career development and education, as well as other employment issues, in the fast-moving world of high performance computing. It's intended for anyone who's interested in maximizing their potential in the high-tech workplace.
Heterogeneous supercomputing is looking more and more like the next big thing in the high performance computing world. Now that IBM has thrown its hat into the ring with its hybrid Opteron-Cell Roadrunner system, it's hard to deny that heterogeneous computing is getting some serious respect. Will HPC turn away from homogeneous architectures and go hetero? Editor Michael Feldman takes a look.
Vacation's over. The HPC news started slowly after the Labor Day weekend but picked up quickly. IBM, Intel, Linux Networx and the DOE Office of Science all made their presence felt this week. DARPA HPCS was a no-show -- again. Editor Michael Feldman reviews some of the most important HPC-related announcements of the week.
If there is anything that will slow down the multi-core juggernaut, it is the lack of software that will run on these chips. While commodity multi-core chips are well-known fixtures in servers and high performance computers, the highest volume markets, represented by the desktop and laptop segments, are just now getting used to the idea of dual-core processors. Within a relatively short period of time, multi-threaded software has become everyone's problem.
The emergence of a successful new programming language is a rare event in the world of information technology. DARPA, through its HPCS program, is attempting to deliver such an event. Editor Michael Feldman examines some of the challenges involved in creating a new general-purpose language for high performance computing and offers a different way to think about the problem.
Cray's recent contract win at NERSC is the latest in a series of good news for the Seattle supercomputer maker. Editor Michael Feldman reviews this current trend of good fortune for Cray. Feldman also talks about what's behind AMD's latest dual-core Opteron announcement and offers his perspective on why the company is starting to push its quad-core processor a full year before the chip is scheduled to be released.
With the next-generation AMD Rev F Opteron processors about to hit the streets next week, Editor Michael Feldman takes a look at the current state of the Opteron-Xeon processor rivalry, noting that AMD's use of HyperTransport has become the true differentiator for the company. Feldman also speculates on how Intel might be planning to make up the difference.
The Xeon Phi coprocessor might be the new kid on the high performance block, but of all the first-rate kickers of the Intel tires, the Texas Advanced Computing Center (TACC) got the first real jab with its new top ten Stampede system. We talk with the center's Karl Schultz about the challenges of programming for Phi--but more specifically, the optimization...
Although Horst Simon was named Deputy Director of Lawrence Berkeley National Laboratory, he maintains his strong ties to the scientific computing community as an editor of the TOP500 list and as an invited speaker at conferences.
Supercomputing veteran Bo Ewald has been neck-deep in bleeding-edge system development since his twelve-year stint at Cray Research back in the mid-1980s, followed by tenures at large organizations like SGI and at startups including Scale Eight Corporation and Linux Networx. He has put his weight behind quantum company...
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined these latency issues by running a common CFD code on Amazon EC2 HPC instance types, using both CPU and GPU cores.
May 15, 2013
Supercomputers at the Department of Energy's National Energy Research Scientific Computing Center (NERSC) have tackled important computational problems such as the collapse of the atomic state and the optimization of chemical catalysts, and are now modeling popping bubbles.
May 10, 2013
Program provides cash awards up to $10,000 for the best open-source end-user applications deployed on 100G network.
May 09, 2013
The Japanese government has revealed its plans to best its previous K Computer effort with what it hopes will be the first exascale system...
May 08, 2013
For engineers looking to leverage high-performance computing, the accessibility of a cloud-based approach is a powerful draw, but there are costs that may not be readily apparent.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/15/2013 | Bull | “50% of HPC users say their largest jobs scale to 120 cores or less.” How about yours? Are your codes ready to take advantage of today’s and tomorrow’s ultra-parallel HPC systems? Download this White Paper by Analysts Intersect360 Research to see what Bull and Intel’s Center for Excellence in Parallel Programming can do for your codes.
In this demonstration of the SGI DMF ZeroWatt disk solution, Dr. Eng Lim Goh, SGI CTO, discusses how SGI DMF software reduces costs and power consumption in an exascale (Big Data) storage datacenter.
The Cray CS300-AC cluster supercomputer offers an energy-efficient, air-cooled design based on modular, industry-standard platforms, featuring the latest processor and network technologies and supporting a wide range of datacenter cooling requirements.