The last 12 months of HPC happenings provided great fodder for HPCwire news coverage and commentary. For this final issue of 2007, editor Michael Feldman takes a look at some of the stories and developments that caught his attention.
By any measurement, 2007 was a miserable year for AMD. This week's revelation of the Barcelona problem is just the latest setback in a year the company would like to forget.
Revenue growth for high performance computing servers continues to outpace the overall server market. According to IDC, HPC server revenue grew 8.8 percent in the third quarter against overall server market growth of only 0.5 percent. What does it all mean?
At SC07, editor Michael Feldman spent some quality time with NEC, gathering some additional information about the new SX-9 supercomputer and the company's overall HPC strategy.
One of Microsoft's challenges in the high performance computing realm will be overcoming some of the anti-Windows zealotry of the Linux HPC community. Editor Michael Feldman takes a look at what the company is up against.
The supercomputing conference season is merging into the holiday shopping season and both are starting earlier every year. SC07 doesn't officially begin until next week, but a bunch of vendors decided to get a jump on the festivities by pre-announcing some of their upcoming offerings.
Traditions die hard at NEC. At a time when vector computers are being forced into smaller and smaller niches, the company has introduced its next generation vector supercomputer, the SX-9. While vector systems may not be extinct, they're definitely on the endangered species list.
The idea of general-purpose computing on graphics processing units (GPGPU) continues to capture the imagination of the HPC community. But the three big players -- Intel, NVIDIA and AMD -- all have their ideas on how this new technology should play out.
The lure of green computing has launched a thousand marketing campaigns, but are HPC users buying it? Editor Michael Feldman takes a look at what may be holding back the green tide in high performance computing.
Sun Microsystems' recent acquisition of the Lustre file system and the associated Cluster File Systems (CFS) resources has caused less gnashing of teeth than one might have suspected. For the time being, Sun has managed to convince the Lustre community that its intentions are honorable.
The combination of quad-core Opterons and DDR InfiniBand is re-landscaping the HPC terrain and propelling the largest clusters to the top of the high performance heap. A rash of recent announcements of big system purchases suggests good times ahead for HPC cluster vendors. Or does it?
As Intel and AMD take a break from beating each other about the quads, this week we'll turn our attention to software -- specifically, parallel programming. Yes, multicore processors, GPUs and FPGAs are all the rage; but without applications to run on them, they're just pretty etchings.
Every year we let the HPCwire readership decide which are the most innovative and successful organizations, products and programs in the HPC industry. This time around we're going to do it a little differently. We've set up a short web survey that makes it super-easy to submit your nominations.
High-end computing aficionados had plenty of entertainment this week. In San Francisco, the Intel Developer Forum offered a smorgasbord of technology talks about all things Intel. And in New York, the HPC on Wall Street conference focused on the financial industry's obsession with automated trading and low latency. Editor Michael Feldman recaps some of the bi-coastal festivities.
AMD's public relations blitz for its new quad-core processors is winding down now. While the impact of the latest Opterons in the overall server market will take some time to develop, their effect in the HPC universe will be almost instantaneous.
Will AMD find true happiness in Barcelona? Will Xeon break Opteron's heart? What evil lurks beneath the Front Side Bus? If this sounds like the premise for some weird, high-tech soap opera, that's because it is. The Intel-AMD feud has been going on for over 20 years and the participants show no signs of reconciliation.
As the formal introduction of AMD's new quad-core "Barcelona" processor approaches, the folks at Intel are trying to grab the limelight with a few well-timed announcements of their own. The fun never stops in x86-land.
If you thought computing was just getting interesting with four cores, what happens when the chipmakers start delivering 100-core chips with multiple types of processing units? Although the multicore revolution is just starting, some are already thinking about what comes next.
As AMD prepares to do battle with Intel in the quad-core arena, it's faced with an uncomfortable reality: Intel is about to jump to 45nm, the next process technology level, with its Xeon processors, just as AMD is pushing its 65nm Opterons out the door.
On Wednesday, the National Science Foundation (NSF) announced the award recipients for two highly coveted petascale supercomputers. But questions are being raised about the validity of NSF's proposal review process.
As computer vendors apply themselves to the task of unleashing parallel computing, it's hard not to see a certain convergence of ideas and approaches. At least your favorite editor thinks so.
Performance is so yesterday. Productivity is the new game. But what's next? Editor Michael Feldman offers his thoughts on where this is all leading.
With last month's announcement of the Constellation System, Sun officially re-entered the elite realm of high-end supercomputing. HP might not be far behind. Why the sudden interest in a business segment with little prospect for growth?
The new Top500 list is out. But how useful is it? Editor Michael Feldman talks about a few things he'd like to see added to the list.
For high performance computing, 2007 has already been an event-filled year and it's only half over. Editor Michael Feldman recaps some of the more significant news since January.
As a smaller counterpart to the Supercomputing Conference held each November, the International Supercomputing Conference (ISC) in Dresden, Germany, is a convenient platform for delivering mid-year HPC product announcements and company happenings. There was plenty to go around this year. Editor Michael Feldman looks at some of the more noteworthy news delivered at the event.
If you want to know where high performance computing is headed, just follow the money. In particular, look at how aggressively Wall Street is applying advanced computing infrastructure in its quest to expand profits.
PeakStream Dissolution Shines Spotlight on Stream Computing
With the buzz still in the air about Google's acquisition of PeakStream, editor Michael Feldman takes one more look at the ramifications of the transaction. He also gets some feedback from the CEO of RapidMind, the last vendor standing in high-level stream computing.
This week we were reminded how relatively minor events at big IT companies can produce serious consequences in the HPC community. For example, by slipping the delivery of its low-end quad-core Opteron, AMD sent Cray to the land of the almost-profitable. Meanwhile, Google used some pocket change to make PeakStream disappear.
Last week, IDC released a report that projects a rather healthy future for InfiniBand adoption. While the interconnect has long been the premier fabric for HPC clusters, applications in other IT sectors are beginning to discover that high performance, low-latency communication is not just for supercomputing.
In 2007, general-purpose computation on GPUs is still the Rodney Dangerfield of HPC. Companies like NVIDIA want to change that. Recently, Andy Keane, general manager of NVIDIA's GPU computing group, briefed me on where the company stands today with their GPGPU effort and gave me a hint about where they're headed.
One of the dark sides of globalization is its disruptive effect on labor markets. In the pursuit of maximum corporate profits, high-tech workers in the United States are being squeezed by foreign labor competition. Within the past few years, the H-1B visa worker program has become a symbol of what's wrong with U.S. policy on globalized labor markets.
The age of multicore architectures demands that the age of parallel programming arrive right along with it. To make sure this occurs, the hardware and software communities are going to have to collaborate like never before. There are signs that the industry is moving in this direction.
In the competition of HPC accelerators, ClearSpeed's coprocessors must battle mass-produced GPUs, Cell processors, and FPGAs. Swimming against the current of commodity solutions is a risky strategy. Does ClearSpeed have the right stuff?
The multicore phenom is changing the way people think about system design. If tricked-out multicore SMP machines can replace low-end cluster systems, what will it mean when manycore arrives? Editor Michael Feldman considers some of the possibilities.
Intel managed to keep things interesting at their semi-annual developer forum even though the chipmaker is currently between product cycles. The company talked up some of their new technology, including their 'Larrabee' initiative, and offered some early performance results for the upcoming 45nm Penryn processors.
Interest in general-purpose computation on GPUs (GPGPU) is at an all-time high. Is it for real or just hype? AMD, NVIDIA, PeakStream and others are putting their stakes in the ground and betting that stream computing will be the next big thing.
What's missing in high performance computing today and where it's going depends on which part of the HPC elephant you're touching. This week, Editor Michael Feldman highlights three feature articles whose authors have rather different perspectives.
The recent approval of the 2007 High Performance Computing R&D Act by the House is good news for the HPC community. Editor Michael Feldman takes a look at the background of the bill, its chances in the Senate, and its significance for federal agencies should it become law.
If you've been reading this publication for any length of time, you already realize that HPC is changing. But do we need to change what it stands for? Editor Michael Feldman takes a look at the Performance versus Productivity debate.
When will the network actually be the computer? Editor Michael Feldman talks about some of the forces at work that keep the PC model of computing chugging along at the expense of ubiquitous remote computing. He also offers his thoughts on how these same influences are driving high performance computing models.
On Monday, Lightfleet Corporation unveiled its Corowave optical interconnect technology, which provides a wireless and switchless inter-processor interconnect. Just gee-whiz technology or a practical way to parallelize processor communication? Editor Michael Feldman takes a look.
With all the talk of hafnium transistors and teraflop processors, Intel has lately been beating AMD to the news cycle. Perhaps in response, last week AMD demonstrated its own teraflop machine, which contains the company's soon-to-be-released R600 stream computing graphics cards.
As we look into the future of computing, massively parallel processing seems destined to become the dominant model. How this transition is going to occur is the subject of much controversy. Editor Michael Feldman takes a look at a recent report on parallel computing and offers some perspective on the subject.
Because high performance computing lives on the leading edge of information technology, predicting the path of HPC is like forecasting the future of the future. When Cray Research and CDC began selling supercomputers with custom processors in the early '70s, it probably seemed inconceivable that in three decades most high performance computing would be done on the descendants of PC chips. Only using the rear-view mirror of the present can we see that it was all inevitable.
In the past couple of weeks, we've been inundated with announcements of "breakthrough" semiconductor and microprocessor technology. While providing entertainment for the geekdom, all the new gadgetry can't be as good as it sounds. Can it? Editor Michael Feldman attempts to separate some of the wheat from the chaff.
The Holy Grail of a single parallel programming language for HPC may no longer be desirable. The dichotomy between high-end supercomputing and mainstream high performance computing may mean that different language models will be required for each environment.
Last Friday, Intel demonstrated x86 processors with twice the transistor density of its current designs. But the ramifications of the company's new 45nm process technology may extend beyond just another Moore's Law cycle.
Sun had a stellar week -- AMD, not so much. Editor Michael Feldman recaps the big Sun-Intel hookup and also highlights a startup company developing a many-core processor and the new supercomputing center for Wyoming.
With this week's announcement of the reorganization and expansion of Tabor Communications, HPCwire and our sister publication, GRIDtoday, will begin to offer a broader view of the high performance computing and Grid domains, respectively. Our new company-wide focus on High Productivity Computing means that HPCwire will be providing news and analysis of the "old" high performance computing sector from an expanded perspective.
Boeing and Procter & Gamble were two of the industrial recipients of the new DOE supercomputer allocations, part of a greatly expanded Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program for 2007.
The x86 processor design represents not only the most dominant architecture in high performance computing, but also in the rest of the computer industry. Editor Michael Feldman recaps some of the reasons for its success and wonders when and how the reign of x86 will come to an end.
In quieter times, the call to fund big science with big systems carries farther than it does when ears are already burning with sour economic and even national security news. For exascale's future, however, the time could be ripe to instill some sense of urgency...
In a recent solicitation, the NSF laid out its needs for furthering scientific and engineering infrastructure with new tools that go beyond top performance. Having already delivered systems like Stampede and Blue Waters, the agency is turning an eye to solving data-intensive challenges. We spoke with the agency's Irene Qualters and Barry Schneider about...
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to absorb peak computational demands that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what they call ‘Climate in a Box,’ a system they note acts as a desktop supercomputer.
May 22, 2013
At some point in the not-too-distant future, building powerful, miniature computing systems will be considered a hobby for high schoolers, just as robotics or even Lego-building are today. That could be made possible through recent advancements made with the Raspberry Pi computers.
May 16, 2013
When it comes to the cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud, benchmarking a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 15, 2013
Supercomputers at the Department of Energy’s National Energy Research Scientific Computing Center (NERSC) have tackled important computational problems such as the collapse of the atomic state and the optimization of chemical catalysts, and are now modeling popping bubbles.
May 10, 2013
Program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/15/2013 | Bull | “50% of HPC users say their largest jobs scale to 120 cores or less.” How about yours? Are your codes ready to take advantage of today’s and tomorrow’s ultra-parallel HPC systems? Download this White Paper by Analysts Intersect360 Research to see what Bull and Intel’s Center for Excellence in Parallel Programming can do for your codes.
In this demonstration of the SGI DMF ZeroWatt disk solution, Dr. Eng Lim Goh, SGI CTO, discusses how SGI DMF software reduces costs and power consumption in an exascale (Big Data) storage datacenter.
The Cray CS300-AC cluster supercomputer offers an energy-efficient, air-cooled design based on modular, industry-standard platforms, featuring the latest processor and network technologies and supporting a wide range of datacenter cooling requirements.