Is there any chance the information technology juggernaut can be managed by a few million brave souls? Hope is on the way.
Wow, 2006 is almost in the books. Editor Michael Feldman recaps some of the top HPC events and trends of the past year.
In the Information Society that we seem to be inhabiting, it has become a cliché to talk about the insatiable demand for information technology workers. The IT workforce shortage is an annoying reality, but it makes sense. In agricultural societies of the past, a significant percentage of the populace ended up as farmers to serve that economic model. Things are no different in this era; only the economic engine has changed.
The adoption of commodity GPUs and Cell processors into high performance computing is disrupting the comfortable framework of homogeneous x86 computing the industry has enjoyed for the past decade. Where is this technology taking us? Editor Michael Feldman talks about the evolution of GPU computing as seen from the perspective of two industry insiders and reviews some recent work in Europe using the Cell processor for molecular dynamics.
If you think the DARPA HPCS program is just of interest for capability-class supercomputing users -- think again. HPCS, in its most ambitious interpretation, is an attempt to drive a stake through the heart of cluster computing. And the government just anted up almost half a billion dollars to do just that. Editor Michael Feldman talks about some of the ramifications of HPCS as we enter the final phase of DARPA's high productivity computing initiative.
The largest supercomputing conference of the year -- SC06 -- is about to begin. Editor Michael Feldman offers his perspective on the event's controversial keynote speaker, Ray Kurzweil. He also discusses an emerging technology that promises to both simplify multi-threaded programming and improve its performance. And while no one at SC06 may be talking about this technology, it could have profound effects on the future of high performance computing.
In the spirit of the upcoming elections, Editor Michael Feldman ponders the liberal and conservative tendencies of scientists and engineers, how they affect technological progress -- and how they're manifested in the world of high performance computing.
Virtualization is entering the HPC world. Editor Michael Feldman talks about three vendors who are trying to rewrite the HPC cluster model with hardware that can be dynamically reconfigured to match changing workloads. He also offers some of his thoughts on AMD's plans for processors that combine x86 CPUs with ATI GPUs.
The use of graphics processing units (GPUs) for general-purpose computing is poised to change the nature of IT, especially in the HPC community. AMD's merger with ATI Technologies might be the catalyst that drives this new trend. Editor Michael Feldman offers his thoughts on the mainstreaming of GPUs and how this might affect the AMD-Intel rivalry.
Editor Michael Feldman talks about Terra Soft's announcement of the first Cell-processor-based supercomputer cluster and suggests a use for discarded PlayStations. He also offers some comments about a Wired Magazine article that chronicles the rise of petascale data centers and how the IT industry is adapting to this new centralized computing model. Finally, Feldman contemplates the state of the DARPA High Productivity Computing Systems (HPCS) program and wonders when we'll get to Phase III.
Is stream computing the next big thing or is it just an excuse to sell more GPUs? Editor Michael Feldman takes a look at ATI's vision to bring the GPU into high performance computing and how this fits into AMD's plans. Feldman also spotlights some of the feature articles in this week's issue including a story about a Japanese project to develop a 10 petaflop supercomputer, a pair of opinion pieces about the merits of RDMA technology, and two interviews with a couple of die-hard Itanium fans.
In San Francisco this week, Intel execs evangelized the company's vision of the future of computing. CEO Paul Otellini used the Intel Developer Forum as a platform to present the overall product roadmap for the next four years and beyond. CTO Justin Rattner talked about their long-range terascale processor development and the new types of applications that will be using this advanced technology. Editor Michael Feldman takes a look at some of Intel's plans, including their vision for terascale computing.
SGI Prepares to Reboot; Intel Beams About Its Laser
Post Date: September 21, 2006 @ 9:00 PM, Pacific Daylight Time
Blog: From the Editor
There was a lot of interesting news for the HPC crowd this week. SGI arranged for its return from bankruptcy; Intel made a splash with a breakthrough in silicon photonics; and two vendors introduced a couple of unique products. Editor Michael Feldman recaps the week's HPC happenings.
This week's issue marks the beginning of a new HPCwire column: High Performance Careers. The column will focus on career development and education, as well as other employment issues, in the fast-moving world of high performance computing. It's intended for anyone who's interested in maximizing their potential in the high-tech workplace.
Heterogeneous supercomputing is looking more and more like the next big thing in the high performance computing world. Now that IBM has thrown its hat into the ring with its hybrid Opteron-Cell Roadrunner system, it's hard to deny that heterogeneous computing is getting some serious respect. Will HPC turn away from homogeneous architectures and go hetero? Editor Michael Feldman takes a look.
Vacation's over. The HPC news started slowly after the Labor Day weekend but picked up quickly. IBM, Intel, Linux Networx and the DOE Office of Science all made their presence felt this week. DARPA HPCS was a no-show -- again. Editor Michael Feldman reviews some of the most important HPC-related announcements of the week.
If there is anything that will slow down the multi-core juggernaut, it is the lack of software that will run on these chips. While commodity multi-core chips are well-known fixtures in servers and high performance computers, the highest volume markets, represented by the desktop and laptop segments, are just now getting used to the idea of dual-core processors. Within a relatively short period of time, multi-threaded software has become everyone's problem.
The emergence of a successful new programming language is a rare event in the world of information technology. DARPA, through its HPCS program, is attempting to deliver such an event. Editor Michael Feldman examines some of the challenges involved in creating a new general-purpose language for high performance computing and offers a different way to think about the problem.
Cray's recent contract win at NERSC is the latest in a string of good news for the Seattle supercomputer maker. Editor Michael Feldman reviews this current trend of good fortune for Cray. Feldman also talks about what's behind AMD's latest dual-core Opteron announcement and offers his perspective on why the company is starting to push its quad-core processor a full year before the chip is scheduled to be released.
With the next-generation AMD Rev F Opteron processors about to hit the streets next week, Editor Michael Feldman takes a look at the current state of the Opteron-Xeon processor rivalry, noting that AMD's use of HyperTransport has become the true differentiator for the company. Feldman also speculates on how Intel might be planning to make up the difference.
There are certain skills that every aspiring technologist should have, but that are not being taught very well in schools or the workplace. Editor Michael Feldman spotlights a couple of individuals who are trying to drive these productivity-enhancing skills into the technology community. Feldman also offers some perspective on IBM's renewed love for the AMD Opteron and talks about some recent developments in the use of Cell processors.
The commercial growth of HPC over the last two decades has fundamentally changed the economic realities of high performance computing. But the dichotomy between government and industrial applications of HPC has created a tension that challenges the future of supercomputing. Is the government losing its direction in HPC? Editor Michael Feldman examines some of the forces at work in this ever-evolving struggle.
After a delay of nearly a year, this week Intel finally launched its dual-core Itanium 2 Processor 9000 series (formerly code-named Montecito). Editor Michael Feldman talks about the significance of Intel's new offering. He also offers a few thoughts on global warming (from the comfort of his air-conditioned office).
With all the talk of heterogeneous supercomputing over the last few years, one might get the impression that a revolution is on the horizon. Editor Michael Feldman discusses some of the forces behind this new computing model and offers his perspective on how it might attain mainstream status.
As summer gets into full swing, Editor Michael Feldman turns his attention to the power and cooling "crisis" in high performance computing. Feldman reviews some recent articles on this hot topic and offers some thoughts of his own.
Thanks to this week's International Supercomputing Conference (ISC), today's issue of HPCwire is one of the largest of the year. ISC always seems to put a charge into the HPC community and 2006 was no exception. This week, Editor Michael Feldman provides a rundown on some of this issue's highlights as well as some pointers to a few exceptional articles from our special ISC coverage on Wednesday and Thursday.
The HPC community has been reaching towards systems capable of petaflops performance ever since the teraflops barrier was conquered back in December of 1996. Today systems that will execute a quadrillion floating point operations per second are close to becoming a reality. In fact, one such system may already exist. Editor Michael Feldman observes some recent developments that suggest we are on the threshold of the petaflops era of supercomputing.
Editor Michael Feldman ponders why software seems to be so resistant to engineering, especially as compared to hardware. Inspired by research being conducted by a computer scientist at the University of Wisconsin, Feldman offers his perspective on why software is so problematic and why it is often perceived with such disdain.
On June 9, Microsoft announced its first production version of Windows Compute Cluster Server 2003. Editor Michael Feldman offers a few thoughts about what this new product could mean for the high performance computing community.
While pondering the huge response to last week's article on the potential of the Cell processor for high performance computing, editor Michael Feldman offers some perspective on the growing interest in this innovative architecture. The High-End Crusader also provides some observations on the Cell's suitability as a general-purpose parallel computing platform.
Editor Michael Feldman has a few words to say about AMD's entry into the world of Dell. He also offers his perspective on the processor performance battle brewing between Woodcrest and Opteron, where Intel just fired the first shot. Lastly, Feldman notes that SGI is getting serious about the enterprise market and it has the benchmarks to prove it.
Today, one of the biggest impediments to high performance computing application development is the difficulty of writing software for cluster architectures. Editor Michael Feldman talks about two new developments that may ease this burden.
The HPC community is still absorbing the news of SGI's bankruptcy filing that was announced on Monday. Editor Michael Feldman offers his perspectives on the company's fortunes. He also scolds Microsoft for offering up yet another pre-release version of Windows Compute Cluster Server 2003.
This week's issue has an eclectic mix of articles from the world of high performance computing. We've covered everything from DARPA's HPCS petascale program to modeling potato chips. In between, we touch on HyperTransport, Dutch clusters, and nanoelectronics.
The Itanium microprocessor has endured a controversial existence that has polarized not just the industry watchers, but the industry itself. First introduced in 2001, the Itanium was advertised as the next generation 64-bit microprocessor that was destined to replace RISC architectures. HPCwire editor Michael Feldman offers some of his perspectives on the Itanium's bumpy ride through history.
With the popularity of the 64-bit x86 architecture in the high performance computing market now established, a lot of us in the HPC community closely follow the rivalry between the two chip vendors, AMD and Intel. Rivalries are fun, especially when it's a "David and Goliath" story. But a lot is on the line. At a time when the demand for commodity clusters and blade servers is rapidly growing, these two companies have much to gain and just as much to lose.
In this issue, three of our feature articles focus on some of the top vendors vying for supercomputer leadership -- Cray, IBM and Linux Networx. Though quite different in product offerings and corporate strategy, all these companies have had and, hopefully, will continue to have a significant role in the high performance computing market.
In this special issue of HPCwire, all of our feature articles are devoted to DARPA's High Productivity Computer Systems program. The program's ambitious goals are to take supercomputing to the petascale level and increase overall system productivity ten-fold by the end of this decade.
Not content to let the Tianhe-2 announcement ride alone, Intel rolled out a series of announcements around its Knights Corner and Xeon Phi products--all of which are aimed at adding some options and variety for a wider base of potential users across the HPC spectrum. Today at the International Supercomputing Conference, the company's Raj....
The Top 500 list of the world's fastest computers has just been announced. Not surprisingly, since it's been reported on prior to the official announcement, the Chinese Tianhe-2 system tops the list. And that is an understatement. We talk with Jack Dongarra, Horst Simon, Hans Meuer and others from the....
Outside of the main attractions, including the keynote sessions, vendor showdowns, Think Tank panels, BoFs, and tutorial elements, the International Supercomputing Conference has balanced its five-day agenda with some striking panels, discussions and topic areas that are worthy of some attention....
Jun 17, 2013 |
The advent of low-power mobile processors and cloud delivery models is changing the economics of computing. But just as an economy car is good at different things than a full size truck, an HPC workload still has certain computing demands that neither the fastest smartphone nor the most elastic cloud cluster can fulfill.
Jun 14, 2013 |
For all the progress we've made in IT over the last 50 years, there's one area of life that has steadfastly eluded the grasp of computers: understanding human language. Now, researchers at the Texas Advanced Computing Center (TACC) are utilizing a Hadoop cluster on its Longhorn supercomputer to move the state of the art of language processing a little bit further.
Jun 13, 2013 |
Titan, the Cray XK7 at the Oak Ridge National Lab that debuted last fall as the fastest supercomputer in the world with 17.59 petaflops of sustained computing power, will rely on its previous LINPACK test for the upcoming edition of the Top 500 list.
Jun 12, 2013 |
At 31 petaflops of sustained LINPACK capacity, the new Chinese Tianhe-2 supercomputer will be the fastest supercomputer in the world when this month's Top 500 list comes out, as we reported previously in HPCwire.
Jun 12, 2013 |
HPC system makers are lining up to announce compatibility with the new fourth generation Intel Core processor, codenamed "Haswell." The new Iris GPUs based on the Haswell architecture are giving Intel new credibility in the graphics processing department.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/15/2013 | Bull | “50% of HPC users say their largest jobs scale to 120 cores or less.” How about yours? Are your codes ready to take advantage of today’s and tomorrow’s ultra-parallel HPC systems? Download this White Paper by Analysts Intersect360 Research to see what Bull and Intel’s Center for Excellence in Parallel Programming can do for your codes.
Join HPCwire Editor Nicole Hemsoth and Dr. David Bader from Georgia Tech as they take center stage on opening night at Atlanta's first Big Data Kick Off Week, filmed in front of a live audience. Nicole and David look at the evolution of HPC and today's big data challenges, discuss real world solutions, and reveal their predictions. Exactly what does the future hold for HPC?
Join our webinar to learn how IT managers can migrate to a more resilient, flexible and scalable solution that grows with the data center. Mellanox VMS is future-proof, efficient and brings significant CAPEX and OPEX savings. The VMS is available today.