The slow road to fast networks.
Ellison and company is all about business computing.
Upgraded machine will sport 192 FPGAs and nearly a terabyte of memory.
New CEO takes company back to the future.
China and Singapore gear up petascale efforts.
Will the computer industry lead us out of the economic wilderness?
A new beginning? Not exactly.
Verari and TotalView Technologies: HPC vendor churn continues.
Highlights and lowlights of the year in HPC.
Next-generation supercomputer project gets a reprieve.
Server maker looking to "restructure the business."
Intel's GPU work stoppage gets scrutinized.
Exascale computing. What is it good for? Certainly not for solving problems that need solving today.
IBM Cat Brain Simulation Research Called a "PR Stunt"
Post Date: November 24, 2009 @ 3:20 PM, Pacific Standard Time
Blog: From the Editor
Has Big Blue coughed up a hair ball?
The roadmap not taken.
NVIDIA connects the GPGPU dots.
Addison and Michael discuss the new TOP500 list. They also cover Justin Rattner's comments on Larrabee and the new systems announced by Cray and SGI.
Buying Teslas by the bushel.
Once more unto the breach.
Addison and Michael discuss the new Cray and Spectra Logic products unveiled this week. They also offer their thoughts on Intel's $1.25B settlement with AMD and Japan's big pull-back in supercomputer funding.
Cloud computing is swallowing the world and taking HPC with it.
Latest GPU-equipped super hits 1.2 peak petaflops.
As HPC embraces GPUs, will reconfigurable computing fade away?
Solid state storage gets its second wind.
Podcast: Bright Computing Debut; Kraken Super Hits a Petaflop; Obama Awards IBM Blue Gene
Post Date: October 09, 2009 @ 2:26 PM, Pacific Daylight Time
Blog: From the Editor
Michael and Addison talk about the latest supercomputer to reach a petaflop and discuss how the IBM Blue Gene garnered a presidential award. In addition, ClusterVision co-founder Matthijs van Leeuwen tells us what's behind the launch of Bright Computing, a new cluster management software vendor.
Can Islamic Law, supercomputers, and a co-ed university peacefully coexist?
NVIDIA toolset latches onto Visual Studio.
Company pushes the envelope for GPU computing.
Chipmaker is cooking up something big.
Larrabee, Westmere, and "microserver" chips: Intel talks up its future silicon at IDF.
Star-P technology to be folded into Microsoft's HPC effort.
The perils of ignoring human behavior when modeling reality.
A focus on low latency is giving a new breed of Ethernet switch vendors a leg up on their competition.
With a new generation of server processors in the offing, 2010 promises to be chockful of multicore goodness.
Platform Computing continues in its quest to be a one-stop shop for cluster middleware.
Another one bites the dust.
The SGI rumor mill keeps grinding along.
Green computing is about the economics of computing, not the environment.
Medvedev makes a personal pitch for more HPC.
Ian Foster says supercomputers may be faster, but clouds may be nimbler.
A recent Platform Computing survey gladdens the hearts of cloud computing proponents.
More angst about high frequency trading.
High frequency trading comes under scrutiny.
HPC drives some of the most cutting-edge science and engineering in the world, but for the most part, anonymously.
Next-generation Japanese supercomputer will rely on Fujitsu SPARC chips.
SGI's willingness to dump the NSF petaflop deal would be a return to sane business practices.
Now that QLogic has a fully populated InfiniBand product line, the company is looking to make up for lost time against the competition.
There were a couple of stories floating around the Intertubes in the past week or so that reminded me of how little we know about large classes of HPC applications.
European Vendors Offer Home-Grown Petascale Supers
Post Date: July 02, 2009 @ 6:32 PM, Pacific Daylight Time
Blog: From the Editor
As American HPC companies retrench, a new crop of European-based vendors is emerging.
The STREAM benchmark plays to one of the big strengths of Intel's Nehalem architecture -- its memory performance.
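For readers who haven't run it, STREAM boils down to timing a few simple loops over arrays far larger than cache. Here is a minimal, illustrative C sketch of the triad kernel only -- not the official benchmark code; the array size and timing scheme are arbitrary choices of ours:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 20000000L  /* illustrative size; must dwarf the last-level cache */

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    const double scalar = 3.0;

    /* touch the arrays so pages are mapped before timing */
    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    /* triad: two reads and one write, 24 bytes per iteration
       (ignoring any write-allocate traffic) */
    for (long i = 0; i < N; i++)
        a[i] = b[i] + scalar * c[i];

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    printf("Triad bandwidth: %.2f GB/s\n", 24.0 * N / secs / 1e9);

    free(a); free(b); free(c);
    return 0;
}
```

Because the loop does almost no arithmetic per byte moved, the reported number tracks sustained memory bandwidth rather than flops, which is exactly why Nehalem's integrated memory controllers shine on it.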
Wondering what the can't-miss activities will be at ISC? Here is one man's opinion.
Intel's Nehalem gets Linpunked.
According to market research and consulting firm iSuppli, Moore's Law is going to run out of money before it runs out of technology.
NVIDIA's next-generation GPU design, the G300, may turn out to be the biggest architectural leap the graphics chip maker has ever attempted.
Amazon EC2 is still the platform of choice, but there are more clouds on the horizon.
Leaves HPC customers clutching at cores.
Don't Throw Wolfram's Baby Out with Google's Bath Water
Post Date: May 21, 2009 @ 5:59 PM, Pacific Daylight Time
Blog: From the Editor
Time to break out of the search engine mindset. Wolfram Alpha is not a Google wannabe.
NEC, Hitachi Bail on 10-Petaflop Supercomputing Project
Post Date: May 14, 2009 @ 5:23 PM, Pacific Daylight Time
Blog: From the Editor
Depressed economy undercuts Japan's petascale ambitions.
The legacy of Silicon Graphics lives on under new management.
A new NSF-funded report on computer simulation R&D worries that the US is losing its mojo in this important technology.
The quants at Paris-based bank BNP Paribas march to a different drummer.
After eight years in the wilderness, the R&D community finally has an advocate in the White House.
An IBM supercomputer is getting ready to beat trivia masters at their own game.
Jilted by IBM, Sun Microsystems has found a new suitor and this one doesn't seem to have commitment issues. But what does this relationship mean for Sun's HPC presence?
Complex event processing may be a technology whose time has come.
For most IT firms, energy efficient computing is just one more piece of the marketing pitch, but for SiCortex, it's a religion.
The SGI deathwatch is over.
While 10 Gigabit Ethernet is getting all the press, InfiniBand keeps chugging along.
NVIDIA rules the GPU computing landscape today, but the lack of a home-grown CPU companion could eventually spell trouble.
A couple of random items this week connected only by the inscrutable nature of research funding.
In a week when Cisco, IBM, Sun Microsystems, Intel and AMD were all featured prominently in the news cycle, I got the feeling that the whole industry might be on the cusp of a realignment.
In the history of HPC, commercial FPGA-based systems have been few and far between. Kuberre Systems offers up its contribution.
Last week, Mathematica inventor Stephen Wolfram announced that he would be launching a new kind of Internet search engine in May, with the not-so-modest name of Wolfram Alpha.
Although rumors of NVIDIA developing its own x86 products have been circulating for years, a comment this week by Michael Hara, the company's senior VP of investor relations, all but confirmed the GPU maker's intention to bring x86 silicon to market.
The industry's headlong rush into cloud computing is shaking up the old order, sometimes in ways even the biggest IT firms can't anticipate.
It's fascinating to read the post-mortem analysis of the economic meltdown, especially as it relates to the role quantitative analysts and their high-tech financial models played in pushing the industry off a cliff.
Last week, the Folding@home team reported that they achieved five petaflops of processing power for their popular protein folding research project.
The new Nehalem processors will push the memory wall back a bit...at least for a while.
Editing source code in the cloud may be an idea whose time has come.
In what looks like a one-company stimulus package, Intel announced that it is going to invest $7 billion in US-based chip manufacturing plants.
In case you hadn't noticed, the global economic collapse is causing more churning in the tech workforce than we've seen since the dot-com bust.
According to new reports released this month from analyst firms IDC and Tabor Research, HPC server revenue contracted in 2008, and 2009 doesn't look any better.
As promised, AMD has added a raft of new 45nm quad-core "Shanghai" Opterons to its product line. The new chips include five energy-sipping HE processors, with speeds ranging from 2.1 to 2.3 GHz and a power draw of just 55 watts.
While much of the U.S. was experiencing presidential inauguration euphoria this week, most of the economic news was dismal. In particular, a lot of the big tech companies were announcing bleak quarterly financial results amid plans to scale back their operations.
If AMD's new "Fusion Render Cloud" supercomputer is going to be doing all the heavy lifting for games and HD rendering in the server, why do you need GPUs in the client?
AMD Expands Fusion Strategy with Petaflop Supercomputer
Post Date: January 13, 2009 @ 4:38 PM, Pacific Standard Time
Blog: From the Editor
Back in June 2008, I suggested Sun Microsystems could accelerate its Network.com compute grid with GPU-based nodes. Sun never did, but it looks like AMD is going to give this idea a whirl.
The hardware and software challenges of multicore/manycore CPUs have been flogged in this publication for a number of years. The assumption was that geek ingenuity would eventually power through the roadblocks. But what if that doesn't happen?
As is usual for the supercomputing world in early January, news is hard to come by. With so many academics in the community, a lot of HPC practitioners are still on their extended winter breaks. As for commercial HPC companies, they may not be so eager to return to work to confront the new economic realities they'll be facing in 2009.
Petaflops supercomputing dominated much of the HPC news in 2008, but the year also witnessed the rise of GPU-accelerated computing and the fall of Linux Networx.
For those of you who thought Intel was angling for an HPC play with its upcoming Larrabee processor family, think again.
Vendors in the HPC market might fare better in the recession than other IT sectors, but they're not immune to economic gravity.
Nevermind the cores. Just hand over the cache.
The GPGPU contingent of the high performance computing crowd got another big boost on Tuesday with the release of the first version of the OpenCL standard.
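For a flavor of what the standard defines, here is an illustrative OpenCL C device kernel of our own devising (not taken from the spec) that adds two vectors, with each work-item computing one element:

```c
/* OpenCL C kernel (illustrative sketch): the host enqueues this
   over a global work size equal to the vector length. */
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *c)
{
    size_t i = get_global_id(0);
    c[i] = a[i] + b[i];
}
```

The portability pitch behind the standard is that the same kernel source can, in principle, be compiled at runtime for a GPU, a multicore CPU, or another accelerator.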
The democratization of HPC is unlikely to happen if every company and institution is forced to build and maintain multi-million dollar datacenters to house supercomputers. But there are alternatives.
In our supposedly tech-driven economy, it's common to hear about computer professionals who have lost their jobs and are unable to find new work in their field. Is the IT industry really that much at odds with its own labor market? Surprisingly, yes.
QLogic Corp. has decided to follow its own path with Quad Data Rate (QDR) InfiniBand.
Barcelona, we hardly knew ye. Today AMD launched its 45nm "Shanghai" quad-core Opterons, sending the ill-fated 65nm Barcelona chips into the microprocessor history books.
The petascale era is in full swing. Yesterday, the DOE announced that the Cray XT 'Jaguar' supercomputer at Oak Ridge has been upgraded to 1.64 peak petaflops.
It seems hardly a week passes without some news of HPC being delivered as an on-demand service. That topic includes everything from in-house grids to commercial clouds, but it's the cloud element that's grabbing the attention of the supercomputing crowd.
In quieter times, sounding the bell for funding big science with big systems tends to resonate further than when ears are already burning with sour economic and national security news. For exascale's future, however, the time could be ripe to instill some sense of urgency...
In a recent solicitation, the NSF laid out its needs for furthering its scientific and engineering infrastructure with new tools that go beyond top performance. Having already delivered systems like Stampede and Blue Waters, the agency is turning an eye to data-intensive challenges. We spoke with the agency's Irene Qualters and Barry Schneider about...
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to absorb computational peaks that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
May 22, 2013
At some point in the not-too-distant future, building powerful, miniature computing systems will be considered a hobby for high schoolers, just as robotics or even Lego-building are today. Recent advances with Raspberry Pi computers could make that possible.
May 15, 2013
Supercomputers at the Department of Energy's National Energy Research Scientific Computing Center (NERSC) have tackled important computational problems such as the collapse of the atomic state and the optimization of chemical catalysts, and now they are modeling popping bubbles.
04/15/2013 | Bull | “50% of HPC users say their largest jobs scale to 120 cores or less.” How about yours? Are your codes ready to take advantage of today’s and tomorrow’s ultra-parallel HPC systems? Download this white paper by analyst firm Intersect360 Research to see what Bull and Intel’s Center for Excellence in Parallel Programming can do for your codes.
In this demonstration of the SGI DMF ZeroWatt disk solution, Dr. Eng Lim Goh, SGI CTO, discusses how SGI DMF software reduces costs and power consumption in an exascale (Big Data) storage datacenter.
The Cray CS300-AC cluster supercomputer offers an energy-efficient, air-cooled design based on modular, industry-standard platforms, featuring the latest processor and network technologies and supporting a wide range of datacenter cooling requirements.