Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them


Google Joins Internet2, Targets Machine Learning and Data Analytics

Oct 25, 2016 |

Internet2 (I2) issued a brief announcement yesterday that Google had joined its community. The move is notable as further evidence of efforts by the big hyperscalers – AWS, Microsoft Azure, and Google – to forge links with and serve the HPC community. Among Google’s goals are gaining access to Internet2 community development Read more…

OpenHPC Pushes to Prove its Openness and Value at SC16

Oct 24, 2016 |

At SC15 last year, the announcement of OpenHPC – the nascent effort to develop a standardized HPC stack to ease HPC deployment – drew a mix of enthusiasm and wariness, the latter in part because of Intel’s prominence in the group. There was general agreement that creating an open source, plug-and-play HPC stack was a good idea, but concern that the initiative might not be sufficiently open. A wait-and-see attitude prevailed.
Heading into SC16 in a few weeks, OpenHPC expects to tangibly demonstrate it has followed through on its commitment to move quickly and remain firmly open.

SC16 Showcases Use of HPC and Cloud In Cancer Research

Oct 19, 2016 |

The effort to attack cancer with HPC resources has been growing for years. Indeed, it’s accurate to say the sequencing of the human genome was as much a tour de force of HPC as of the new DNA sequencers. Back in June, Department of Energy Secretary Ernest Moniz blogged on the effort (Supercomputers are key Read more…

ORNL’s Future Technologies Group Tackles Memory and More

Oct 13, 2016 |

“Imagine if you’ve got a Titan-size computer (27 PFlops) and it has main memory that’s partially non-volatile memory, and you could just leave your data in that memory between executions, then just come back and start computing on that data as it sits in memory,” says Jeffrey Vetter, group leader of the Future Technologies Group at Oak Ridge National Laboratory.
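
By way of illustration only (this is not ORNL code), the sketch below shows, on a small scale, the usage model Vetter describes: working data lives in a memory-mapped file, and on a DAX-capable file system backed by non-volatile memory the next run simply maps the same region and resumes computing on data that never left memory. The file path is a placeholder.

    /* Illustrative only (not ORNL code): keep working data in a memory-mapped
       file so a later run can resume on it in place.  On a DAX file system
       backed by non-volatile memory the mapping is the memory itself; the path
       below is a placeholder. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define N (1 << 20)                      /* ~8 MB of doubles */

    int main(void)
    {
        int fd = open("/mnt/pmem/state.bin", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }
        if (ftruncate(fd, N * sizeof(double)) != 0) { perror("ftruncate"); return 1; }

        double *state = mmap(NULL, N * sizeof(double), PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
        if (state == MAP_FAILED) { perror("mmap"); return 1; }

        /* Pick up whatever the previous execution left behind and keep computing. */
        for (size_t i = 0; i < N; i++)
            state[i] += 1.0;

        msync(state, N * sizeof(double), MS_SYNC);   /* make the update durable */
        munmap(state, N * sizeof(double));
        close(fd);
        return 0;
    }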

DOE Invests $16M in Supercomputer Technology to Advance Material Sciences

Sep 22, 2016 |

The Department of Energy (DOE) plans to invest $16 million over the next four years in supercomputer technology that will accelerate the design of new materials by combining “theoretical and experimental efforts to create new validated codes.” The new program will focus on software development that eventually may run on exascale machines. Luke Shulenburger of Sandia Read more…

New Genomics Pipeline Combines AWS, Local HPC, and Supercomputing

Sep 22, 2016 |

Declining DNA sequencing costs and the rush to do whole genome sequencing (WGS) of large cohort populations – think 5,000 subjects now, but many thousands more soon – present a formidable computational challenge to researchers attempting to make sense of large cohort datasets. No single architecture is best. This month researchers report developing a hybrid Read more…

Larry Smarr Helps NCSA Celebrate 30th Anniversary

Sep 20, 2016 |

Throughout the past year, the National Center for Supercomputing Applications has been celebrating its 30th anniversary. On Friday, Larry Smarr, whose unsolicited 1983 proposal to the National Science Foundation (NSF) begat NCSA in 1985 and helped spur NSF to create not one but five national centers for supercomputing, gave a celebratory talk at NCSA.

IBM Debuts Power8 Chip with NVLink and Three New Systems

Sep 8, 2016 |

Not long after revealing more details about its next-gen Power9 chip due in 2017, IBM today rolled out three new Power8-based Linux servers and a new version of its Power8 chip featuring Nvidia’s NVLink interconnect. One of the servers – Power S822LC for High Performance Computing (codenamed “Minsky”) – uses the new chip (Power8 with Read more…

SAVE Project to Improve HSA Energy Use Wraps Up

Sep 6, 2016 |

The three-year European SAVE project, which wraps up this week, has produced tools and technologies that can help reduce heterogeneous system architecture (HSA) energy costs by 20 percent, say its organizers. SAVE is the somewhat abbreviated acronym for Self-Adaptive Virtualization-Aware High-Performance/Low-Energy Heterogeneous System Architectures, an EU collaborative research project, funded by the EU’s Seventh Framework Read more…

How Would U.S. Perform in Coding Olympics? Not Great Says Study

Aug 31, 2016 |

China’s impressive standing up of the 93-petaflops Sunway TaihuLight atop the Top500 list in June ruffled more than a few feathers in the West. Here’s yet more fodder for stirring up nervousness about national coding acumen – a report by HackerRank, a platform that posts coding challenges, indicates the U.S. would place 28th in a Coding Olympics Read more…

Gazing into Computational Biology’s Crystal Ball

Aug 23, 2016 |

Sorting out computational biology’s future is tricky. It likely won’t be singular. First-principles, mechanistic simulation has so far proven challenging but could eventually become game changing. Meanwhile, pattern recognition and matching in massive ‘omics’ datasets have been extremely productive and are likely to remain dominant at present. Now, an MIT professor and colleagues write that two Read more…

ARM Unveils Scalable Vector Extension for HPC at Hot Chips

Aug 22, 2016 |

ARM and Fujitsu today announced a scalable vector extension (SVE) to the ARMv8-A architecture intended to enhance ARM capabilities in HPC workloads. Fujitsu is the lead silicon partner in the effort (so far) and will use ARM with SVE technology in its Post-K computer, Japan’s next flagship supercomputer planned for the 2020 timeframe. This is an important incremental step for ARM, which seeks to push more aggressively into mainstream and HPC server markets.
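
The central idea in SVE is that software never hard-codes a vector width. As a rough, hedged sketch (the intrinsic names follow the ARM C Language Extensions for SVE and are assumptions here, not taken from the announcement), a vector-length-agnostic loop queries the lane count at run time and uses a predicate to handle the loop tail, so the same binary runs on any implementation from 128 to 2048 bits:

    /* Hedged sketch of a vector-length-agnostic loop; intrinsic names follow the
       ARM C Language Extensions (ACLE) for SVE and are an assumption here, not
       taken from the announcement.  Compile for an SVE-enabled target. */
    #include <arm_sve.h>
    #include <stdint.h>

    /* y[i] += a * x[i], without ever hard-coding the hardware vector width. */
    void daxpy_sve(int64_t n, double a, const double *x, double *y)
    {
        for (int64_t i = 0; i < n; i += svcntd()) {      /* svcntd(): 64-bit lanes */
            svbool_t pg = svwhilelt_b64(i, n);           /* predicate masks tail   */
            svfloat64_t vx = svld1(pg, &x[i]);
            svfloat64_t vy = svld1(pg, &y[i]);
            vy = svmla_x(pg, vy, vx, svdup_f64(a));      /* fused multiply-add     */
            svst1(pg, &y[i], vy);
        }
    }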

Container Testing Reveals ‘Memory Pressure’ on Apps

Aug 18, 2016 |

With early adopters of application container technology completing initial testing in multi-tenant settings, potential performance issues are beginning to surface. Among them, according to hyperscaler LinkedIn, is a Linux kernel feature called “control groups” (cgroups), used with most containers to assign resources. In an analysis based on several months of “pressure testing,” Zhenyun Zhuang, a software Read more…
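
For context on the mechanism at issue, here is a generic sketch (not code from LinkedIn’s study) that reads a container’s memory limit and current usage from the cgroup v1 memory controller; the mount point and group name are assumptions:

    /* Generic illustration (not code from LinkedIn's study): read a container's
       memory limit and current usage from the cgroup v1 memory controller.  The
       mount point and the group name "CONTAINER_GROUP" are assumptions. */
    #include <stdio.h>

    static long long read_value(const char *path)
    {
        long long v = -1;
        FILE *f = fopen(path, "r");
        if (f) {
            if (fscanf(f, "%lld", &v) != 1)
                v = -1;
            fclose(f);
        }
        return v;
    }

    int main(void)
    {
        const char *base = "/sys/fs/cgroup/memory/CONTAINER_GROUP";
        char path[512];
        long long limit, usage;

        snprintf(path, sizeof path, "%s/memory.limit_in_bytes", base);
        limit = read_value(path);
        snprintf(path, sizeof path, "%s/memory.usage_in_bytes", base);
        usage = read_value(path);

        if (limit > 0 && usage >= 0)
            printf("cgroup memory in use: %.1f%% of limit\n",
                   100.0 * (double)usage / (double)limit);
        return 0;
    }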

Think Fast – Is Neuromorphic Computing Set to Leap Forward?

Aug 15, 2016 |

Steadily advancing neuromorphic computing technology has created high expectations for this fundamentally different approach to computing. Its strengths – like the human brain it attempts to mimic – are pattern recognition (space and time) and inference reasoning. Advocates say it will also be possible to compute at much lower power than current paradigms. At ISC Read more…

Intel to Acquire AI Startup Nervana Systems

Aug 9, 2016 |

If we needed another sign that Intel is serious about mining AI market opportunities, it came today when the chip company announced it had inked a “definitive agreement” to acquire artificial intelligence and deep learning company Nervana Systems. Financial terms haven’t been disclosed yet, but a source familiar with the deal told Recode it’s worth more Read more…

MPI Is Not Perfect … Yet

Aug 3, 2016 |

The Message Passing Interface (MPI) is the standard definition of a communication API that has underpinned traditional HPC for decades. The message-passing programming model represents distributed-memory hardware architectures using processes that send messages to each other. When first standardised in 1993-94, MPI was a major step forward from the many proprietary, system-dependent, and semantically different message-passing libraries that came before it.
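
The model is easy to see in a few lines. The minimal sketch below (names and values are illustrative) shows the pattern MPI standardizes: each rank is a separate process with private memory, and data moves only through explicit send and receive calls:

    /* Minimal sketch of the message-passing model (illustrative, not from the
       article): each rank is a separate process with private memory, and data
       moves only through explicit send/receive calls.  Build with an MPI
       compiler wrapper such as mpicc and launch with, e.g., mpirun -np 2. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?       */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes total? */

        if (rank == 0 && size > 1) {
            int payload = 42;
            MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int payload;
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", payload);
        }

        MPI_Finalize();
        return 0;
    }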

FlyElephant 2.0 Is Now Available

Jul 28, 2016 |

July 28 — The FlyElephant team is happy to announce the release of the FlyElephant 2.0 platform, with the following updates: an internal expert community, collaboration on projects, public tasks, Docker and Jupyter support, a new file storage system, and work with HPC clusters. FlyElephant is a platform for data scientists, engineers and scientists that provides ready computing infrastructure for Read more…

MIT’s Multicore Swarm Architecture Advances Ordered Parallelism

Jul 21, 2016 |

A relatively new architecture explicitly designed for parallelism – Swarm – based on work at MIT has shown promise for substantially speeding up classes of applications (graphs, for example) and decreasing the programming burden to achieve parallelism. The work, recounted in a recent paper, Unlocking Ordered Parallelism with the Swarm Architecture, bucks conventional wisdom and Read more…

RISC-V Startup Aims to Democratize Custom Silicon

Jul 13, 2016 |

Momentum for open source hardware grew this week with the launch of startup SiFive and its open source chip platforms based on the RISC-V instruction set architecture. The founders of the fabless semiconductor company — Krste Asanovic, Andrew Waterman, and Yunsup Lee — invented the free and open RISC-V ISA at the University of California, Berkeley, six years ago. The progression of RISC-V and the launch of SiFive open the door to a new way of chip building that skirts prohibitive licensing costs and lowers the barrier to entry…

ISC Workshop Tackles the Co-development Challenge

Jul 12, 2016 |

The long-percolating discussion over ‘co-development’ and how best to undertake it has gained new urgency in the race towards exascale computing. At a workshop held at ISC2016 last month – Form Follows Function: Do algorithms and applications challenge or drag behind the hardware evolution? – several distinguished panelists offered varying viewpoints. Yesterday, session organizer Tobias Weinzierl posted a summary of the workshop discussion. Weinzierl (Durham University) and co-organizer Michael Bader (Technische Universität München) are active participants in the ExaHyPE project (An Exascale Hyperbolic PDE (partial differential equation) Engine, funded by the EU’s Horizon 2020 program).