
MeerKAT Telescope Reveals New Details of Universe

Jul 20, 2016

This week, the world saw the first images captured by the MeerKAT array, which made its debut as one of the most powerful telescopes of its kind. Using 16 of a planned 64 dishes, the telescope array recorded the radio signals coming from a small area of sky comprising less than 0.01 percent of Read more…

Lengau: Global Grand Challenges Through an African Lens

Jul 19, 2016

South African Council for Scientific and Industrial Research (CSIR) Program Director Kagiso Chikane recently welcomed 100 guests to the Centre for High Performance Computing (CHPC) in Cape Town for the dedication of the fastest computer on the African continent. “Lengau,” which means “cheetah” in Setswana, ranked 121st on the June TOP500 list of the world’s fastest supercomputers.

Briefing Alert: SoftBank Will Purchase ARM Ltd for $32B

Jul 18, 2016

ARM Ltd has agreed to be acquired by the Japanese technology company SoftBank for $32B, according to both companies. ARM-based chips are already dominant players in the mobile computing market, and recently efforts to push ARM processors into servers, including HPC, have gained momentum. For example, Japan’s next flagship supercomputer, the Post-K computer, will be Read more…

DOE’s ESnet Marks 30 Years of Networking Leadership With Interactive Timeline

Jul 8, 2016

July 8 — Thirty years ago, the idea of a unified network that allowed people to easily connect with colleagues, facilities and data centers was still a dream – the World Wide Web was three years away. At the time, networking was confined to specialized networks created to support targeted research communities. One of Read more…

Trinity Wrestles with Knights Landing Programming Challenge via COE

Jul 5, 2016

Seventy-one years ago, on July 16, 1945, an incredible explosion lit up the New Mexico night sky. This was the Trinity Test, the world’s first nuclear detonation, and it marked the beginning of the Nuclear Age. It also ushered in the age of supercomputers, which essentially began with weapons science at Los Alamos National Laboratory (LANL). Now a new Trinity, a next-generation Cray XC supercomputer, is about to take center stage to help the national security labs achieve their primary mission: to provide the nation with a safe, secure and effective nuclear deterrent.

XSEDE16 Program Emphasizes Inclusion, Says Chair Kelly Gaither

Jun 29, 2016

Ahead of XSEDE16, which takes place in Miami from July 17-21, HPCwire reached out to conference chair Dr. Kelly Gaither to get the inside track on this year’s program, her work in scientific visualization and her commitment to increasing diversity in HPC. Gaither serves as the director of Visualization at the Texas Advanced Computing Center at The University of Texas at Austin. She has over 30 refereed publications in fields ranging from computational mechanics to supercomputing applications to scientific visualization. Over the past ten years, she has actively participated in conferences related to her field and has given numerous invited talks.

SDSC to Participate in Obama Administration’s Smart Manufacturing Initiative

Jun 27, 2016

June 27 — The San Diego Supercomputer Center (SDSC) at the University of California San Diego will participate in a comprehensive national initiative announced by the White House this week to spur advances in digital process controls to improve the efficiency of U.S. manufacturing. UC Berkeley, UC Irvine, and UC Los Angeles are also participating Read more…

MANGO Project Tackles Power, Performance and Predictability for Future HPC

Jun 27, 2016

Under the H2020 High Performance Computing call (Towards exascale high performance computing), the MANGO project was awarded funding of 5.8 million euros for three years of research running through October 2018. Coordinated by Prof. Jose Flich of the University of Valencia, the consortium includes École polytechnique fédérale de Lausanne, Politecnico di Milano, the University of Zagreb, Centro Regionale Information Communication Read more…

China Debuts 93-Petaflops ‘Sunway’ with Homegrown Processors

Jun 19, 2016

You may have heard the rumors, but now it’s official: China has built and deployed a Chinese-made supercomputer delivering 93 petaflops on LINPACK (125 petaflops peak) at its Wuxi Supercomputer Center, near Shanghai. A few days ago HPCwire received an advance copy of a report on the new system prepared by TOP500 author Jack Dongarra detailing the feeds and speeds and Read more…

NVIDIA Debuts PCIe-based P100; Broadens Aim at CPU Dominance

Jun 19, 2016

NVIDIA seems to be mounting a vigorous effort to dethrone the CPU as the leader of the processor pack for HPC and demanding datacenter workloads. That’s a tall order. The introduction at ISC 2016 of a PCIe-based version of its new Tesla P100 card is one element in the strategy. It should ease the upgrade path Read more…

Heading into ISC16, OpenHPC Releases Latest Stack with 60-plus Packages

Jun 16, 2016

SC15 was sort of a muted launch party for OpenHPC – the nascent effort to develop a ‘plug-and-play’ software framework for HPC. There seemed to be widespread agreement that the idea had merit, but not a lot of knowledge of the details, and some wariness because Intel was a founding member and vocal advocate. Next week, ISC16 will mark the next milestone for OpenHPC, which has since grown into a full-fledged Linux Foundation Collaborative Project and today released version 1.0.1 of OpenHPC (build and test tools).

Paul Messina Shares Deep Dive Into US Exascale Roadmap

Jun 14, 2016

Ahead of ISC 2016, taking place in Frankfurt, Germany, next week, HPCwire reached out to Paul Messina to get an update on the deliverables and timeline for the United States’ Exascale Computing Project. The ten-year project has been charged with standing up at least two capable exascale supercomputers in 2023 as part of the larger National Strategic Computing Initiative launched by the Obama Administration in July 2015.

How Lawrence Livermore Is Facing Exascale Power Demands

Jun 9, 2016

The old adage “you cannot improve what you do not measure” is fresh again in the age of ubiquitous data. When considering the challenges of exascale computing, power is right at the top of the list and the major leadership-class centers want to make sure they’re doing everything they can to manage the demands of power today – which can run as high as 10 MW at peak for the largest machines – and in the coming exascale era, when the number could be three times that high. At loads of this magnitude, the largest HPC facilities need to have all the relevant power data within arm’s reach.
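A quick back-of-envelope calculation puts those loads in operating terms. The roughly $1M per megawatt-year electricity cost used below is a commonly cited HPC rule of thumb, assumed here for illustration rather than drawn from the article:

```python
# Back-of-envelope operating cost at leadership-class power draws.
# COST_PER_MW_YEAR is an assumed rule-of-thumb figure, not from the article.
COST_PER_MW_YEAR = 1_000_000  # USD per megawatt-year (assumption)

for label, megawatts in [("largest systems today (peak)", 10),
                         ("projected exascale era (~3x)", 30)]:
    annual_cost = megawatts * COST_PER_MW_YEAR
    print(f"{label}: {megawatts} MW -> ~${annual_cost / 1e6:.0f}M per year")
```

At those rates, even single-digit-percent efficiency gains translate into millions of dollars a year, which is why the leadership-class centers want fine-grained power data within arm’s reach.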

Intel Xeon E7 Balloons In-memory Capacity, Targets Real-Time Analytics

Jun 8, 2016

Who crunches more data faster, wins. It’s this drive that cuts through and clarifies the essence of the evolutionary spirit in the computer industry, the dual desire to get to real time with bigger and bigger chunks of data. The locomotive: HPC technologies adapted to enterprise mission-critical data analytics. With its memory capacity of up Read more…

NITRD’s Proposed $4.5B Budget in the Spotlight

Jun 7, 2016

Two weeks ago Rep. Darin LaHood (R-Illinois), sponsor of the 2017 NITRD funding bill – The Networking and Information Technology Research and Development Program – issued a formal statement championing the proposed budget. NITRD, of course, is the nation’s primary source of federally funded work on advanced information technologies (IT) in computing, networking, and software…

TACC Director Lays Out Details of 2nd-Gen Stampede System

Jun 2, 2016

With a $30 million award from the National Science Foundation announced today, the Texas Advanced Computing Center (TACC) at The University of Texas at Austin (UT Austin) will stand up a second-generation Stampede system based on Dell PowerEdge servers equipped with Intel “Knights Landing” processors, next-generation Xeon chips and future 3D XPoint memory.

Call for Papers Issued for ISAV 2016 Workshop

Jun 1, 2016

June 1 — The considerable interest in the HPC community regarding in situ analysis and visualization is due to several factors. First is the I/O cost savings: data is analyzed and visualized while being generated, without first being stored to a file system. Second is the potential for increased accuracy, where fine temporal sampling of transient analysis Read more…
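The I/O-savings argument is easy to sketch in a few lines of Python. The simulation step and the max-reduction below are hypothetical stand-ins for a real solver and a real analysis routine, not anything specified by the workshop:

```python
import numpy as np

def simulation_step(state, t):
    """Stand-in for one timestep of a numerical solver (hypothetical)."""
    return state + 0.01 * np.sin(t + state)

# Post hoc analysis would dump snapshots to disk every N steps and
# analyze them later, paying the I/O cost and sampling the transient
# coarsely. In situ, the reduction runs while the data is still in
# memory, at every timestep, and never touches the file system.
state = np.zeros(100_000)
max_over_time = []
for t in range(500):
    state = simulation_step(state, t)
    max_over_time.append(float(state.max()))  # in situ reduction, no I/O

print(f"peak value over the run: {max(max_over_time):.4f}")
```

Because the statistic is computed at every timestep rather than from every Nth stored snapshot, the transient is sampled at full temporal resolution, which is the accuracy argument the call makes.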

India Readies First of 70-Plus Supercomputers for 2017

May 24, 2016

India is set to stand up an indigenously built supercomputer next year, according to a Times of India report. The Centre for Development of Advanced Computing (C-DAC) will oversee the construction of this “PARAM” series system, which will be the first of more than 70 systems slated to be built under India’s National Supercomputing Mission. Read more…

ORNL Researchers Create Framework for Easier, Effective FPGA Programming

May 24, 2016

Programmability and portability problems have long inhibited broader use of FPGA technology. FPGAs are already widely and effectively used in many dedicated applications (accelerated packet processing, for example), but generally not in situations that require ‘reconfiguring’ the FPGA to accommodate different applications. A group of researchers from Oak Ridge National Laboratory is hoping to change that.

Ace Computers Rolls Out Big Data HPC Clusters for the Military and Government

May 24, 2016

May 24 — Ace Computers and its affiliate Ace Technology Partners just introduced Big Data HPC clusters designed for the unique demands of public sector applications. For federal, state, and local governments and military organizations such as the Air Force, Army, Navy, and Marines, the ability to organize and analyze disparate data quickly and accurately Read more…