Seagate-led SAGE Project Delivers Update on Exascale Goals

By John Russell

November 29, 2016

Roughly a year and a half after its launch, the SAGE exascale storage project led by Seagate has delivered a substantive interim report – Data Storage for Extreme Scale. It outlines technical details of progress to date and architectural plans going forward. Of particular note is progress on co-design for the use cases and applications expected to benefit most from exascale. There has also been a fair amount of work on accommodating big data and traditional HPC workflows in the same environment.

“We’ve tried to give ourselves lofty goals,” said Malcolm Muggeridge, the U.K.-based senior engineering director at Seagate who is leading the initiative. “We would like to become the platform of choice in exascale for storage solutions and will have the technology addressing that space in the 2022 timeframe. The main piece of work that has been completed [so far] is co-design activities.”

You may recall that the SAGE (StorAGe for Exascale Data Centric Computing) system aims to implement a Big Data/Extreme Computing (BDEC) and High Performance Data Analytics (HPDA) capable infrastructure suitable for extreme scales, including exascale and beyond. SAGE is one of 15 projects recently funded under Horizon 2020. Direct funding is actually through the European Technology Platforms (ETP) organization – “industry-led stakeholder groups recognized by the European Commission as key actors in driving innovation, knowledge transfer and European competitiveness. ETPs develop research and innovation agendas and roadmaps for action at EU and national level to be supported by both private and public funding.”

[Figure: SAGE architecture]

The new white paper is a fairly extensive document that follows a nine-month formal project review last June and includes work completed since. Among the topics covered are: platform requirements; systems architecture; platform components; and ecosystem elements. Launched in September of 2015, SAGE tackles eight research areas: “the study of the 1) application use cases co-designing solutions to address 2) Percipient Storage Methods, 3) Advanced Object Storage, and 4) tools for I/O optimization, supporting 5) next generation storage media and developing a supporting ecosystem of 6) Extreme Data Management, 7) Programming techniques and 8) Extreme Data Analysis tools.”

According to the report, the SAGE storage system will be capable of efficiently storing and retrieving immense volumes of data at extreme scales, with the added functionality of “percipience,” or the ability to accept and perform user-defined computations integral to the storage system. SAGE will be built around the Mero object storage software platform and its supporting ecosystem of tools and techniques, which will work together to provide the functionality and scaling required by extreme-scale workflows.
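To make the “percipience” idea concrete, here is a minimal, purely illustrative Python sketch in which a user-defined function is shipped to the storage layer and executed next to the data, so only the small result travels back to the client. The class and method names are invented for this example and are not part of Mero or any SAGE API.

```python
# Illustrative only: a toy model of "percipient" storage, where user-defined
# computation runs inside the storage system rather than on the client.
# PercipientStore and compute_on_object are hypothetical names, not SAGE APIs.

class PercipientStore:
    def __init__(self):
        self._objects = {}  # object id -> list of data chunks

    def put(self, oid, chunks):
        self._objects[oid] = list(chunks)

    def get(self, oid):
        # Conventional path: all chunks travel back to the client.
        return self._objects[oid]

    def compute_on_object(self, oid, func):
        # Percipient path: the function travels to the data; only the
        # (small) result travels back to the client.
        return func(self._objects[oid])


store = PercipientStore()
store.put("sim-checkpoint-042", range(1_000_000))

# The client ships a reduction instead of reading ~1M values over the network.
total = store.compute_on_object("sim-checkpoint-042", sum)
print(total)
```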

One important goal is accommodating new storage technologies, such as non-volatile RAM (NVRAM). Leveraging object storage to assist ‘in-memory, closer-to-memory’ computing is another. In an earlier interview, Sai Narasimhamurthy, the Seagate research staff engineer responsible for coordinating the technical work, told HPCwire that the stack would “have memory at the top, various NVRAM technologies in the middle, of course you have your flash technology as well as part of the stack, and then you have scratch disks and then archival disks.”

“You could have an object, or a piece of it, lying in high speed memory, a piece of it in NVRAM, and a piece of the object lying in scratch based upon the usage profile of the object,” explained Narasimhamurthy. “The view of the object is transparent to the application, it’s just I/O to an object, but on the back end you could have various types of layout, which could be very interesting because you could optimize your layout for performance or for resiliency. You could do all sorts of things.”
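As a rough illustration of that layout idea, the short Python sketch below uses made-up tier names and a made-up placement policy (it is not Mero’s actual layout engine) to show how different extents of a single object might land on different tiers according to a usage profile, while the application still sees one object.

```python
# Illustrative sketch of a multi-tier object layout (not Mero's real policy).
# Tiers are ordered fastest to slowest; "hot" extents land higher in the stack.

TIERS = ["memory", "nvram", "flash", "scratch_disk", "archive_disk"]

def place_extents(extent_access_counts):
    """Map each extent of an object to a tier based on how often it is read.

    extent_access_counts: list of access counts, one per fixed-size extent.
    Returns a layout as a list of (extent_index, tier) pairs.
    """
    layout = []
    for idx, hits in enumerate(extent_access_counts):
        if hits > 1000:
            tier = "memory"
        elif hits > 100:
            tier = "nvram"
        elif hits > 10:
            tier = "flash"
        elif hits > 0:
            tier = "scratch_disk"
        else:
            tier = "archive_disk"
        layout.append((idx, tier))
    return layout

# One object, five extents, very different access patterns.
print(place_extents([5000, 250, 12, 3, 0]))
# [(0, 'memory'), (1, 'nvram'), (2, 'flash'), (3, 'scratch_disk'), (4, 'archive_disk')]
```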

[Figure: SAGE co-design]

Clearly there are big goals for the project. Co-design is a critical early element in defining functional requirements, emphasized Muggeridge: “We have carefully selected use cases that reflect these data-centric applications. The use cases provide specific inputs that are designed to fine-tune/modify the framework for the SAGE architecture.”

Muggeridge noted there is a range of requirements drivers. The report calls out: inputs from the BDEC community and the US Department of Energy labs; data needs for big science, as exemplified by the Square Kilometre Array and the Human Brain Project; extreme-scale I/O requirements drafted by the ETP; and extreme-scale data needs highlighted by the HPDA community. The information was gathered mostly through workshops.

Top-level objectives have also been established and are largely familiar. One calls for the ability “to store and retrieve extreme volumes of data approaching orders of ~Exabyte for a given problem”. Another is the ability to manage workflows that include data from simulations and instruments. Not surprisingly, data I/O rates, data integrity, and data analytics, among other capabilities, are being targeted. Indeed, the first part of the project has been largely ‘definitional’, with a rollout of demonstrations planned for the next year.

Use of co-design principles to inform these objectives is a distinguishing feature of the project. SAGE has selected several use cases (applications) and spelled out in detail the parameters being measured. Use cases “cover a broad range of domains, including data from some of the world’s largest scientific experiments (including one of the world’s largest nuclear fusion facilities and one of the largest synchrotrons in Europe), aside from extremely data-centric HPC codes.” Below is a table with the selected use cases.

[Table: SAGE use cases]

So far, SAGE has gathered the first formal list of inputs from all of the specified use cases. “This phase included gathering inputs on formal I/O characterization, SAGE architecture analysis, data retention characterization and data scaling analysis, which was an analytical study of how data and I/O requirements of the use cases would scale on a future basis.”

[Figure: SAGE use case metrics]

The SAGE system is built on multiple tiers of storage device hardware technology (see figure below). SAGE does not require a specific type of storage device technology, but typically it would include at least one NVRAM tier (Intel 3D XPoint technology is a strong contender at the moment), at least one flash tier, and at least one disk tier. Together, these tiers are housed in standard form-factor enclosures and provide their own compute capability, enabled by standard x86 embedded processing components. Moving up the system stack, compute capability increases for faster, lower-latency devices.

Mero, the object storage software first developed by Xyratex and now being extended by Seagate, is layered on top of this hardware stack, providing fundamental management of object I/O and storage across tiers. Essentially, Mero forms the core of the SAGE system. Mero is presented to users through the Clovis API. Everything above Clovis forms the SAGE ecosystem components.
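The real Clovis interface is a C API shipped with Mero, so the sketch below is only a hypothetical Python illustration of the kind of create/write/read object I/O an ecosystem component sitting above Clovis would perform; none of the names correspond to actual Clovis calls.

```python
# Hypothetical sketch of application-level object I/O, in the spirit of the
# create/write/read flow an ecosystem component above Clovis might perform.
# ObjectStore and its methods are invented for illustration only.

class ObjectStore:
    def __init__(self):
        self._data = {}  # object id -> byte buffer

    def create(self, oid):
        self._data[oid] = bytearray()

    def write(self, oid, offset, payload: bytes):
        # Grow the object if the write extends past its current end.
        buf = self._data[oid]
        if len(buf) < offset + len(payload):
            buf.extend(b"\x00" * (offset + len(payload) - len(buf)))
        buf[offset:offset + len(payload)] = payload

    def read(self, oid, offset, length):
        return bytes(self._data[oid][offset:offset + length])


store = ObjectStore()
store.create("experiment/detector-frame-0001")
store.write("experiment/detector-frame-0001", 0, b"frame header")
print(store.read("experiment/detector-frame-0001", 0, 12))  # b'frame header'
```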

[Figure: SAGE system stack]

Much remains to be done, but SAGE appears to be making steady progress. Demonstrations, some at the Jülich Supercomputing Centre, are expected over the next year or so. The latest paper is best read in full for current technical details of SAGE’s plans.

Link to new SAGE paper (Data Storage for Extreme Scale): http://sagestorage.eu/sites/default/files/Sage%20White%20Paper%20v1.0.pdf

 
