The global distributed computing system known as the Worldwide LHC Computing Grid (WLCG) brings together resources from more than 150 computing centers in nearly 40 countries. Its mission is to store, distribute, and analyze the roughly 25 petabytes of data generated each year by the Large Hadron Collider (LHC) at CERN, the European Laboratory for Particle Physics in Geneva, Switzerland.
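To put that figure in perspective, here is a back-of-envelope sketch of the sustained data rate implied by 25 petabytes per year. The per-second number is our own arithmetic, not a figure from the WLCG:

```python
# Back-of-envelope: average sustained data rate implied by 25 PB/year.
PB = 10**15                          # bytes in a (decimal) petabyte
seconds_per_year = 365 * 24 * 3600   # ~3.15e7 seconds

avg_rate_gb_s = 25 * PB / seconds_per_year / 10**9
print(f"Average sustained rate: {avg_rate_gb_s:.2f} GB/s")  # ~0.79 GB/s
```

In other words, the grid must absorb, on average, close to a gigabyte of new data every second, around the clock, before any replication or reprocessing.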
As an excellent conductor of heat and electricity, graphene is a promising electronics substrate, but because it lacks a band gap it can't be switched on and off the way silicon can. With no conventional fix in sight, a team of UC Riverside researchers has taken a completely new approach.
Bigger is not always better in the world of supercomputing. While data scientists almost always want more computational throughput, the key question is how best to deliver it: through traditional, power-hungry x64 processors, or through the cheap, low-power ARM processors that drive smartphones and tablets? The answer is not always clear.
Supercomputing veteran Bo Ewald has been neck-deep in bleeding-edge system development since his twelve-year stint at Cray Research, which began in the mid-1980s and was followed by tenures at large organizations like SGI and at startups, including Scale Eight Corporation and Linux Networx. He has put his weight behind quantum company…
As NCSA's Blue Waters supercomputer approaches full-service status, we thought it would be a good time to look at how the machine was built.
At this June's International Supercomputing Conference (ISC'13) in Leipzig, Germany, Gerhard Wellein will be delivering a keynote entitled "Fooling the Masses with Performance Results: Old Classics & Some New Ideas." HPCwire caught up with Wellein and asked him to preview some of the themes of his upcoming talk and expound on his philosophy of programming for performance in the multicore era.
AMD is launching its most powerful graphics card yet: the dual-GPU FirePro S10000 promises 5.91 teraflops of peak single precision and 1.48 teraflops of peak double precision floating point performance. And with AMD's "Graphics Core Next" (GCN) architecture under the hood, the S10000 can deliver compute and graphics processing simultaneously.
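Those peak numbers can be reproduced with a quick sanity check. The sketch below assumes the commonly cited board specs (3,584 stream processors across the two GPUs and an 825 MHz core clock), which come from AMD's published specifications rather than this article, along with GCN's usual two flops per stream processor per cycle (a fused multiply-add) and double precision at one quarter of the single-precision rate:

```python
# Back-of-envelope peak-FLOPS check for the dual-GPU FirePro S10000.
# Assumed specs (not stated in the article): 3,584 stream processors
# total, 825 MHz core clock, 2 flops/cycle per stream processor (FMA),
# double precision at 1/4 the single-precision rate on GCN "Tahiti".
stream_processors = 3584
clock_ghz = 0.825
flops_per_cycle = 2                 # one fused multiply-add = 2 flops

peak_sp_tflops = stream_processors * clock_ghz * flops_per_cycle / 1000
peak_dp_tflops = peak_sp_tflops / 4

print(f"Peak single precision: {peak_sp_tflops:.2f} TFLOPS")  # ~5.91
print(f"Peak double precision: {peak_dp_tflops:.2f} TFLOPS")  # ~1.48
```

Both results land on the advertised 5.91 and 1.48 teraflop figures, which suggests AMD is quoting straightforward clock-times-width peaks rather than measured throughput.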
On Monday, AMD announced it is adding ARM-based Opterons to its portfolio, the first non-x86 server chips in the company's history. The new processors, due out in 2014, will use 64-bit ARM SoCs on top of its SeaMicro Freedom Fabric technology, and will be aimed at the datacenter and cloud space.
Dell hands over its Calxeda ARM-based server platform to the Apache community.