Sectors » Government

XSEDE16 Program Emphasizes Inclusion, Says Chair Kelly Gaither

Jun 29, 2016 |

Ahead of XSEDE16, which takes place in Miami from July 17-21, HPCwire reached out to conference chair Dr. Kelly Gaither to get the inside track on this year’s program, her work in scientific visualization and her commitment to increasing diversity in HPC. Gaither serves as the director of Visualization at the Texas Advanced Computing Center at The University of Texas at Austin. She has over 30 refereed publications in fields ranging from computational mechanics to supercomputing applications to scientific visualization. Over the past ten years, she has actively participated in conferences related to her field and has given numerous invited talks.

SDSC to Participate in Obama Administration’s Smart Manufacturing Initiative

Jun 27, 2016 |

June 27 — The San Diego Supercomputer Center (SDSC) at the University of California San Diego will participate in a comprehensive national initiative announced by the White House this week to spur advances in digital process controls to improve the efficiency of U.S. manufacturing. UC Berkeley, UC Irvine, and UC Los Angeles are also participating Read more…

MANGO Project Tackles Power, Performance and Predictability for Future HPC

Jun 27, 2016 |

Under the H2020 High Performance Computing call (Towards exascale high performance computing), the MANGO project was awarded 5.8 million euros in funding for three years of research, running until October 2018. Coordinated by Prof. Jose Flich of the University of Valencia, the consortium includes École polytechnique fédérale de Lausanne, Politecnico di Milano, University of Zagreb, Centro Regionale Information Communication Read more…

China Debuts 93-Petaflops ‘Sunway’ with Homegrown Processors

Jun 19, 2016 |

You may have heard the rumors, but now it’s official: China has built and deployed a Chinese-made supercomputer that delivers 93 petaflops on the LINPACK benchmark (125 petaflops peak) at its Wuxi Supercomputer Center, near Shanghai. A few days ago HPCwire received an advance copy of a report on the new system prepared by TOP500 author Jack Dongarra detailing the feeds and speeds and Read more…
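As a quick sanity check on the figures above, LINPACK efficiency is simply the sustained result divided by the theoretical peak. A back-of-the-envelope computation (the 93 and 125 petaflops values come from the report cited in the article; the efficiency label is ours):

```python
# Back-of-the-envelope LINPACK efficiency for the new Sunway system.
rmax_pflops = 93.0    # sustained LINPACK performance (petaflops)
rpeak_pflops = 125.0  # theoretical peak performance (petaflops)

efficiency = rmax_pflops / rpeak_pflops
print(f"LINPACK efficiency: {efficiency:.1%}")  # -> 74.4%
```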

NVIDIA Debuts PCIe-based P100; Broadens Aim at CPU Dominance

Jun 19, 2016 |

NVIDIA seems to be mounting a vigorous effort to dethrone the CPU as the leader of the processor pack for HPC and demanding datacenter workloads. That’s a tall order. The introduction at ISC 2016 of a PCIe-based version of its new Tesla P100 card is one element in the strategy. It should ease the upgrade path Read more…

Heading into ISC16, OpenHPC Releases Latest Stack with 60-plus Packages

Jun 16, 2016 |

SC15 was sort of a muted launch party for OpenHPC – the nascent effort to develop a ‘plug-and-play’ software framework for HPC. There seemed to be widespread agreement the idea had merit, not a lot of knowledge of details, and some wariness because Intel was a founding member and vocal advocate. Next week, ISC16 will mark the next milestone for OpenHPC, which has since grown into a full-fledged Linux Foundation Collaborative Project and today released version 1.0.1 of OpenHPC (build and test tools).

Paul Messina Shares Deep Dive Into US Exascale Roadmap

Jun 14, 2016 |

Ahead of ISC 2016, taking place in Frankfurt, Germany, next week, HPCwire reached out to Paul Messina to get an update on the deliverables and timeline for the United States’ Exascale Computing Project. The ten-year project has been charged with standing up at least two capable exascale supercomputers in 2023 as part of the larger National Strategic Computing Initiative launched by the Obama Administration in July 2015.

How Lawrence Livermore Is Facing Exascale Power Demands

Jun 9, 2016 |

The old adage “you cannot improve what you do not measure” is fresh again in the age of ubiquitous data. When considering the challenges of exascale computing, power is right at the top of the list and the major leadership-class centers want to make sure they’re doing everything they can to manage the demands of power today – which can run as high as 10 MW at peak for the largest machines – and in the coming exascale era, when the number could be three times that high. At loads of this magnitude, the largest HPC facilities need to have all the relevant power data within arm’s reach.
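To put the article’s 10 MW figure, and the roughly 30 MW projected for the exascale era, in perspective, a rough annual electricity cost can be sketched as below. The $0.05/kWh rate is an illustrative assumption on our part, not a figure from the article, and real facilities pay negotiated rates with demand charges:

```python
# Rough annual electricity cost at a sustained load.
# The utility rate is a hypothetical illustration, not an article figure.
HOURS_PER_YEAR = 8760
RATE_USD_PER_KWH = 0.05  # assumed flat rate for illustration

def annual_cost_musd(load_mw: float) -> float:
    """Annual electricity cost, in millions of USD, at a constant load."""
    kwh = load_mw * 1000 * HOURS_PER_YEAR  # MW -> kW, times hours
    return kwh * RATE_USD_PER_KWH / 1e6

print(f"10 MW today:    ${annual_cost_musd(10):.1f}M/year")  # -> $4.4M/year
print(f"30 MW exascale: ${annual_cost_musd(30):.1f}M/year")  # -> $13.1M/year
```

Even under these simplified assumptions, tripling the load adds millions of dollars per year, which is why fine-grained power measurement matters to the leadership-class centers.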

Intel Xeon E7 Balloons In-memory Capacity, Targets Real-Time Analytics

Jun 8, 2016 |

Who crunches more data faster, wins. It’s this drive that cuts through and clarifies the essence of the evolutionary spirit in the computer industry, the dual desire to get to real time with bigger and bigger chunks of data. The locomotive: HPC technologies adapted to enterprise mission-critical data analytics. With its memory capacity of up Read more…

NITRD Proposed $4.5B Budget In the Spotlight

Jun 7, 2016 |

Two weeks ago Rep. Darin LaHood (R-Illinois), sponsor of the 2017 NITRD funding bill – The Networking and Information Technology Research and Development Program – issued a formal statement championing the proposed budget. NITRD, of course, is the nation’s primary source of federally funded work on advanced information technologies (IT) in computing, networking, and software…

TACC Director Lays Out Details of 2nd-Gen Stampede System

Jun 2, 2016 |

With a $30 million award from the National Science Foundation announced today, the Texas Advanced Computing Center (TACC) at The University of Texas at Austin (UT Austin) will stand up a second-generation Stampede system based on Dell PowerEdge servers equipped with Intel “Knights Landing” processors, next-generation Xeon chips and future 3D XPoint memory.

Call for Papers Issued for ISAV 2016 Workshop

Jun 1, 2016 |

June 1 — The considerable interest in the HPC community regarding in situ analysis and visualization is due to several factors. First is an I/O cost saving: data is analyzed/visualized while being generated, without first being stored to a file system. Second is the potential for increased accuracy, where fine temporal sampling of transient analysis Read more…
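The in situ idea described above can be sketched minimally: each timestep is reduced to a small summary as it streams out of the solver, so the full field data never has to touch the file system. The "simulation" here is a stand-in random field of our own invention, not any real code from the workshop:

```python
# Minimal sketch of in situ analysis: reduce each timestep as it is
# generated, discarding the raw field instead of writing it to disk.
import random

def simulate_timesteps(n_steps: int, n_cells: int):
    """Yield one synthetic field per timestep (stand-in for a solver)."""
    for _ in range(n_steps):
        yield [random.gauss(0.0, 1.0) for _ in range(n_cells)]

def in_situ_stats(fields):
    """Reduce each field to (min, max, mean) as it streams past."""
    for field in fields:
        yield min(field), max(field), sum(field) / len(field)

# Only the tiny per-step summaries survive; the fields are never stored.
summaries = list(in_situ_stats(simulate_timesteps(n_steps=5, n_cells=1000)))
for lo, hi, mean in summaries:
    print(f"min={lo:+.2f}  max={hi:+.2f}  mean={mean:+.2f}")
```

A post hoc workflow would instead write all five fields to disk and analyze them later, paying the I/O cost the article mentions.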

India Readies First of 70-Plus Supercomputers for 2017

May 24, 2016 |

India is set to stand up an indigenously-built supercomputer next year, according to a Times of India report. The Centre for Development of Advanced Computing (C-DAC) will be overseeing the construction of this “PARAM” series system, which will be the first of more than 70 systems slated to be built under India’s National Supercomputing Mission. Read more…

ORNL Researchers Create Framework for Easier, Effective FPGA Programming

May 24, 2016 |

Programmability and portability problems have long inhibited broader use of FPGA technology. FPGAs are already widely and effectively used in many dedicated applications (accelerated packet processing, for example), but generally not in situations that require ‘reconfiguring’ the FPGA to accommodate different applications. A group of researchers from Oak Ridge National Laboratory is hoping to change that.

Ace Computers Rolls Out Big Data HPC Clusters for the Military and Government

May 24, 2016 |

May 24 — Ace Computers and its affiliate Ace Technology Partners just introduced Big Data HPC clusters designed for the unique demands of public sector applications. For federal, state, and local governments and military organizations such as the Air Force, Army, Navy, and Marines, the ability to organize and analyze disparate data quickly and accurately Read more…

Barcelona Supercomputing Center Develops New Bioinformatics Tool Against HIV

May 11, 2016 |

Viruses’ natural mutational agility has long been problematic for established therapies. Determining a therapeutic compound’s effectiveness against a mutated viral pathogen mostly entails empirical screening of the mutated virus with compounds to gauge effectiveness. This week researchers from the Barcelona Supercomputing Center and IrsiCaixa, the Catalan AIDS Research Institute, reported developing a bioinformatics method to Read more…

TGAC Installs Largest SGI UV 300 Supercomputer for Life Sciences

May 11, 2016 |

Two weeks ago, The Genome Analysis Centre (TGAC) based in the U.K. turned on the first of two new SGI UV300 computers. Next week, or thereabouts, TGAC will bring a second identical system online. Combined with its existing SGI UV2000, TGAC will have the largest SGI system dedicated to life sciences in the world. The upgrade will allow TGAC to significantly shorten the time required to assemble wheat genomes, a core activity in TGAC efforts to enhance worldwide food security.

Technology Test Drive: PNNL Offers Exploratory Licenses

May 10, 2016 |

Signing a two-page agreement and paying just $1,000 can get U.S. companies an opportunity to test drive promising technologies through a new, user-friendly commercialization option being offered at the Department of Energy’s Pacific Northwest National Laboratory. PNNL is the only DOE lab to offer this option, called an exploratory license, which gives companies six months Read more…

Météo-France Fires Up Bull Supercomputer Running on ‘Broadwell’ Processors

May 9, 2016 |

This spring Météo-France, the national meteorological service for France and its overseas territories, turned on its ‘new’ supercomputer, a long-planned upgrade delivered by Bull/Atos that doubled the performance of its predecessor (already on the Top500 list), significantly reduced power consumption, and is allowing Météo-France to increase the resolutions of its weather and climate models. Read more…

Intel Weighs In on NSCI

May 5, 2016 |

For the last 60 years, high-performance computing (HPC) has been instrumental in chipping away at the world’s toughest challenges such as disease control, climate research, and energy efficiency. Organizations in many industries, including oil and gas, financial services, pharmaceutical, and life sciences, as well as in academia and government, have drawn on this technology to Read more…