Tag: high performance computing
Traditional research and Big Data applications are increasingly run on the same HPC systems as the lines between their computational requirements blur and demand for dual-use capability grows, said Bob Braham, SGI CMO. He pointed to last month's ramp-up of the latest SGI supercomputer at the Earthquake and Volcano Information Center at the University of Tokyo's Earthquake…
Numascale offers a price breakthrough for shared memory systems by integrating a simple add-on card into commodity servers. The hardware is now deployed in systems with more than 1,700 cores, and its memory addressing capability is virtually unlimited. The technology has a set of advantages that should catch the interest of innovative developers.
Microsoft's Steve Ballmer announced a major restructuring effort today in a company memo in which he outlined the company's plan to consolidate its many disparate divisions under a few large thematic umbrellas. While the technical computing group has already been part of its Azure cloud division for some time, the act of…
According to Lustre founder and current Parallel Scientific CEO Peter Braam, languages that buck the mainstream trend, including Haskell, could find further inroads into HPC as models, data sizes and overall complexity grow. We spoke with Braam at ISC and discovered that, like Python, there are…
Worldwide sales of HPC servers perked up by 5.3 percent during the first quarter of 2013, to $2.5 billion, industry watchers at IDC reported last week. The increase was driven by sales of small and midrange HPC systems, as sales of high-end supercomputers declined.
The Intelligence Advanced Research Projects Activity (IARPA) is putting out some RFI feelers in hopes of pushing new boundaries with an HPC program. However, at the core of its evaluation process is an overt dismissal of benchmarks, including floating-point operations per second (FLOPS).
With the rollout of high performance, lossless Ethernet products over the last few years, there were more than a few analysts predicting the slow retreat of InfiniBand. But thanks to a peculiar confluence of technology roadmaps, a payoff in some investments made by Mellanox, and a pent-up demand for server and storage deployment now being alleviated by Intel's Romley platform, InfiniBand is having a big year.
The researchers and medical professionals conducting the world’s first FDA-approved personalized medicine clinical trial for pediatric cancer are using a unique HPC and cloud-based IT infrastructure designed by Dell to accelerate genetic analysis and identification of targeted treatments for patients. As part of the infrastructure, trial-specific portals and a high-speed, grid-based architecture are being implemented to facilitate the rapid transfer of genomic and relevant clinical data between collaborators in the trial.
The 26th International Supercomputing Conference (ISC'11) will take place in Hamburg, Germany, June 19-23, 2011. ISC is a key global conference and exhibition for high performance computing, networking and storage. With over 2,000 attendees expected throughout the week, an even more extensive conference program than the previous year, and about 150 leading exhibitors of supercomputing, software, storage, networking and infrastructure technologies, ISC'11 is on track to draw the highest attendance in the history of the event.
Lost in the flotilla of vendor news at the Supercomputing Conference (SC11) in Seattle last month was the announcement of a new directives-based parallel programming standard for accelerators. Called OpenACC, the open standard is intended to bring GPU computing into the realm of the average programmer, while making the resulting code portable across other accelerators and even multicore CPUs.