SGI was awarded a contract worth $30,750,000 to supply the Air Force Research Laboratory (AFRL) with a 3.9 petaflops SGI ICE X supercomputer. This is the second time in the last two months that SGI has inked a major deal with the Department of Defense (DoD) for its ICE product.
The US Army Research Laboratory is getting $500,000 and one billion hours of supercomputing time to study the inner workings of internal combustion engines. The award was granted by the Department of Defense’s High Performance Computing Modernization Program (HPCMP) Frontier Project, now in its second year. The Army Research Lab will receive $100,000 per year.
The Maui High Performance Computing Center is eager to proceed with the Maui Solar Initiative now that an assessment has determined the 1.5-megawatt solar farm would not significantly impact the environment.
The GSA's FedRAMP program has cloud providers looking to receive government endorsement.
<img style="float: left;" src="http://media2.hpcwire.com/hpcwire/SC12_logo_small.jpg" alt="" width="137" height="74" />The upcoming Supercomputing Conference (SC12) may not turn out to be the blow-out high performance computing hullabaloo it normally is. The recent GSA scandal involving overzealous spending at one of the agency's conferences a couple of years ago has precipitated new federal policy that is forcing government labs to abandon their exhibits and cut back attendance at the world’s largest supercomputing event.
The Department of Defense has announced a cloud computing strategy that aligns the agency with Federal efficiency standards. It details the transition from traditional IT services, including methods to promote adoption, establish an enterprise cloud infrastructure and consolidate datacenter resources. Beyond technical details, the program also aims to overcome any cultural challenges associated with migration to cloud technology.
Federal R&D money could be an easy target for cost-cutting under the latest legislation.
The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the NC State effort to overcome the memory limitations of multicore chips; the sale of the first-ever commercial quantum computing system; Cray’s first GPU-accelerated machine; speedier machine learning algorithms; and the connection between shrinking budgets and increased reliance on modeling and simulation.
High-performance computing (HPC) isn’t restricted to computer rooms. It is also found “embedded” within expensive gadgets. For example, it is at your local hospital inside CAT and MR scanners. It is inspecting new semiconductors. It is inside defense radar and signals intelligence platforms. In fact, the market for embedded HPC is thought to be about the same size as the market for supercomputers.
The US Defense Advanced Research Projects Agency has selected four “performers” to develop prototype systems for its Ubiquitous High Performance Computing (UHPC) program. According to a press release issued on August 6, the organizations include Intel, NVIDIA, MIT, and Sandia National Laboratories. Georgia Tech was also tapped to head up an evaluation team for the systems under development.