ISC Cloud 2012 BOFs: Applications/Software, Reference Architectures and Data Transfer

By Nicole Hemsoth

October 2, 2012

At ISC Cloud 2012, talking points for the Birds of a Feather sessions were hand-picked by the participants. While the importance of security was a key theme throughout the two-day event, several other salient topics emerged during the voting process. The finalized BoF roster included “Applications and Software in the Cloud,” “HPC Cloud Reference Architectures” and “Data Transfer in/out of Clouds,” held in parallel. Each group had about 10-15 participants discussing the challenges and implications of its chosen topic. After the conference, each moderator submitted notes on the group’s findings.

BOF 1: Applications/Software in the Cloud

Moderator: David Wallom, Oxford eResearch Centre

The discussion began with how cloud computing could change the supply of application software, with ISVs potentially partnering with cloud providers to change the delivery model. This would allow application flexibility, but it was pointed out that a pay-as-you-go (PAYG) model is inherently unpredictable. That may be an issue for groups accustomed to a fairly stable cost model, though in many other areas PAYG is becoming the norm. A further problem is that in current IaaS cloud models costing is not simple, and long-term users may resist the introduction of new business models.

It was pointed out that it isn’t just the end-user applications that are affected, but all the other components as well. One illustration of the problem with traditional licensing outside end-user applications: LSF is sold as an annual license, even though a group may need it only a few times a year (fewer than ten).
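To make the arithmetic concrete, here is a minimal sketch in Python of the break-even point between an annual license and PAYG. The prices are entirely hypothetical, not actual LSF rates:

    # Hypothetical numbers for illustration only -- not actual LSF pricing.
    annual_license_cost = 20000.0  # flat yearly fee, paid whether used or not
    payg_cost_per_run = 500.0      # per-use charge under a pay-as-you-go model

    runs_per_year = 8              # "required a few times (less than 10)"

    payg_total = payg_cost_per_run * runs_per_year
    print("Annual license: $%.0f" % annual_license_cost)          # $20000
    print("PAYG (%d runs): $%.0f" % (runs_per_year, payg_total))  # $4000

    # Break-even: the number of runs at which the annual license pays off.
    break_even = annual_license_cost / payg_cost_per_run
    print("PAYG is cheaper below %.0f runs/year" % break_even)    # 40

Under these assumptions PAYG wins by a wide margin for an occasional user; the annual license only pays off for heavy, sustained use.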

With this change of model, how do we support legacy applications? This will depend on the type: open source community applications will have to rely on their communities, while commercial applications will require their users to ‘gang up,’ as it were. A SaaS delivery mechanism brings its own problems, since legacy version support may still be required: many commercial customers want longevity. Over the longer term, cloud migration means users will have to become more accustomed to version migration, and if so, application providers will have to make version migration easier.

The level of cloud utilization will depend on the different application communities and the differing maturity of their software. Flexibility is greatest where software is newest, i.e., where application users do not yet favor one model over another. It is unlikely that cloud will change the application design model away from MPI, and OpenMP will likewise still need to be supported. Over the longer term, the different types of interconnection software (MPI/OpenMP) won’t matter, as the hardware will catch up with newer ideas.

We mustn’t forget that software isn’t just the application but also the networks that exist around it: Community-as-a-Service and Support-as-a-Service.

Of course, less data means an easier move to the cloud, but if you can do more operations on your data in the cloud this becomes less important: for example, downloading only the important results, though this may require running workflows in the cloud.

With the emergence of standard APIs for different components, the time is right for application designers to embrace these changes in models by moving to the most advantageous cloud provider. Application designers must learn lessons from previous instances where public cloud providers changed their models and made earlier design decisions irrelevant or less than optimal.
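As one illustration of such provider-agnostic APIs, Apache Libcloud (already available at the time) exposes a common interface over many IaaS providers, so switching providers is largely a change of driver. A minimal sketch, with placeholder credentials and an arbitrary size/image choice:

    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    # Swapping Provider.EC2 for another supported provider is the main change
    # needed to target a different cloud; credentials here are placeholders.
    Driver = get_driver(Provider.EC2)
    conn = Driver('ACCESS_KEY_ID', 'SECRET_KEY')

    sizes = conn.list_sizes()    # instance types offered by the provider
    images = conn.list_images()  # available machine images

    # Boot one node; real code would pick the size and image deliberately.
    node = conn.create_node(name='hpc-node-1', size=sizes[0], image=images[0])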

It is a whole ecosystem. Remember that:

  • The user decides on the software that best solves their problem; end users don’t care about the underlying model, they just want solutions.

  • Hardware licensing versus software licensing costs can be decisive.

  • Optimization for many different types of use cases can lead to different types of hardware solutions.

  • The cloud provider chooses the hardware, software and interconnects, i.e., the most efficient solution.

  • Community clouds targeted at different communities are not inevitable but likely, as different ISVs and communities get together to optimize their requirements and solutions.

  • Whatever use of cloud (or otherwise) we decide on has to fit with the other parts of the business model/activity.

Cloud providers have the opportunity to get away from unnecessary user complications and to support their users with new models. There are good opportunities for long-term relationships between ISVs and cloud providers.

Finally, the difference between cloud and the Application Service Provider model (which we have had for around ten years) was discussed. It was brought to light that the quality and ubiquity of network resources, and the sheer number of resource types, have changed.

BOF 2: Reference Architecture

Moderator: Josh Simons, VMware

Two basic models exist for moving HPC workloads into a cloud environment:

1. Virtual clusters formed by creating a persistent set of virtual machines on demand. Each virtual machine runs the same software stack (OS, libraries, batch scheduler, etc.) as was used in the bare-metal environment. This is desirable because, from an end-user/scientist perspective, the interface to the compute resources remains the same: they use the same batch scheduler interfaces. The use of virtual machines is transparent to the end-user.

2. Virtual machines are created on demand to run each job and persist only for the lifetime of the job. This allows each job to run with its own custom software stack, and individual jobs can be migrated dynamically across the virtual infrastructure for load balancing, resiliency, or power management. This is not an evolutionary model, in that the end-user would need to interact with a new software layer that understands how to launch VMs rather than scheduling onto existing cluster nodes. This could be an entirely new layer or an augmented version of existing job schedulers (a sketch of this model appears below).

It was noted that hybrids of the above two approaches could be used as well.
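To make the second model concrete, here is a minimal sketch of a per-job VM scheduler layer. All names are hypothetical, and the provisioning engine is simulated rather than a real cloud API:

    import uuid

    class PerJobVMScheduler:
        # Augmented scheduler layer: boots a fresh VM per job (model 2).
        def __init__(self, provisioner):
            self.provisioner = provisioner  # wraps the cloud's provisioning engine

        def submit(self, job_script, software_stack):
            # Each job gets its own custom software stack.
            vm_id = self.provisioner.create_vm(image=software_stack)
            try:
                return self.provisioner.run(vm_id, job_script)
            finally:
                # The VM persists only for the lifetime of the job.
                self.provisioner.destroy_vm(vm_id)

    class FakeProvisioner:
        # Stand-in for a real provisioning engine, for illustration only.
        def create_vm(self, image):
            vm_id = uuid.uuid4().hex[:8]
            print("booting VM %s with stack '%s'" % (vm_id, image))
            return vm_id

        def run(self, vm_id, script):
            print("running %s on VM %s" % (script, vm_id))
            return 0

        def destroy_vm(self, vm_id):
            print("destroying VM %s" % vm_id)

    scheduler = PerJobVMScheduler(FakeProvisioner())
    scheduler.submit("solver.sh", software_stack="centos6-openmpi")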

The following components were identified as critical pieces of a reference architecture for HPC in the cloud. (Not an exhaustive list.)

  • Self-service capabilities to enable end-users to create clusters on the fly.

  • A catalogue of virtual machines and software stacks that can be used to create these virtual clusters.

  • A provisioning engine to instantiate these virtual machines (it was noted that OpenStack work on “placement groups” is relevant).

  • An ability to elastically flex compute resources up and down as needs change.

  • A monitoring component to watch the health and performance of the infrastructure.

  • Billing and chargeback.

  • Data staging components – to move data in and out of the cloud.

  • Policy-based resource control mechanisms to mediate access to hardware resources between multiple cloud tenants.

  • Security – data security and protection and secure isolation of workloads in a multi-tenant environment.

It was noted that a “cloud” might not be virtualized, though virtualization was seen to make a number of the above functions easier to deliver.

It was posited that once HPC moves into the cloud, there will be a need to support complex applications that require cross-cloud workflows, similar to some of the meta-computing concepts developed within the grid computing realm. It was noted that if “cloud” is the follow-on to grid computing, then it would be useful to examine grid architectures closely, to determine which features should be brought forward into mainstream cloud architectures.

There are problems still to be solved if HPC is to move into the cloud. Some are technical – end-to-end automation of the use of HPC in the cloud. Others are business related: licensing, politics, and budgetary. The budgetary issue is particularly interesting: In the face of “unlimited” compute resources, how does an organization control access to limit its budgetary spend? This is particularly important for HPC workloads, which as we know can consume all available resources at a site. What happens when such users get access to unlimited resources in the cloud? Answering these questions will likely uncover additional required components for an HPC cloud reference architecture.
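One way to answer the budgetary question is a policy layer that authorizes resource requests against a spend limit before they reach the provisioning engine. A minimal sketch, with hypothetical names and prices:

    class BudgetGuard:
        # Rejects requests that would push monthly spend past a hard limit.
        def __init__(self, monthly_budget, price_per_core_hour):
            self.monthly_budget = monthly_budget
            self.price = price_per_core_hour
            self.spent = 0.0

        def authorize(self, cores, hours):
            cost = cores * hours * self.price
            if self.spent + cost > self.monthly_budget:
                return False  # would exceed the organization's spend limit
            self.spent += cost
            return True

    guard = BudgetGuard(monthly_budget=10000.0, price_per_core_hour=0.10)
    print(guard.authorize(cores=1024, hours=24))  # True: $2,457.60 fits
    print(guard.authorize(cores=4096, hours=24))  # False: $9,830.40 more would exceed $10,000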

BOF 3: Data Transport

Moderator: Rolf Sperber, Alcatel-Lucent

Size Matters

There has to be a differentiation by the size of the datasets transported in and out of the cloud. The target is optimized access. It can be achieved for small amounts of data if there is a predictable way of accessing required data or moving data in or out of the cloud. For large datasets, quality of service will have to be guaranteed over longer periods of time.

Small Data

To have instant access to data in a cloud, current metadata will not be sufficient. Software is required that has knowledge of the network infrastructure and can define a virtual network on demand. Multiple-carrier, and in consequence multiple-vendor, environments will have to be taken into account.

Big Data

This concerns huge datasets transported over long distances. The final target is predictable transfer times for multiple datasets transported to a single location.

First Iteration

  • Federation of folders into a single folder with a metadata server to keep track of size, locality, etc.

  • Optimize transport by means of adequate transfer software. Here we are talking about software products (most of them commercial) that help work around the TCP throughput problem on high-latency links (see the sketch after this list).

  • Optimize access by proactive distribution if possible. Here settled paradigms of work will have to be overcome.
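The “TCP problem” referred to above is that a single TCP stream’s throughput is capped by its window size divided by the round-trip time, which is why untuned transfers crawl over long-distance links. A quick worked example, using the standard bandwidth-delay arithmetic with illustrative numbers:

    # Throughput of one TCP stream <= window / RTT.
    window_bytes = 64 * 1024   # a typical default ~64 KiB window
    rtt_seconds = 0.150        # ~150 ms round trip, e.g., Europe <-> US West

    throughput_bps = window_bytes * 8 / rtt_seconds
    print("max ~%.1f Mbit/s per stream" % (throughput_bps / 1e6))  # ~3.5 Mbit/s

    # To fill a 10 Gbit/s path, the window must cover the bandwidth-delay product:
    required_window = 10e9 / 8 * rtt_seconds
    print("window needed: ~%.0f MB" % (required_window / 1e6))     # ~188 MB

Transfer tools attack this with large windows, parallel streams, or UDP-based protocols.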

Second Iteration

  • Optimize transport requirements with respect to site of computation.

  • Provide network control to enable clients to define an appropriate virtual network.

    • Multiple carriers with heterogeneous environments to be taken into account.

    • Charging models to be implemented.

Third Iteration (Target)

  • Further optimize applications to minimize transport requirements.

  • Integrate network control into applications.

    • Federation

    • Software-defined networking (SDN) taking care of both the exact instant at which a transfer starts and the duration of the transfer in relation to its size.

    • SDN calculating both routes and time of reservation.

    • SDN calculating total duration.
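As a sketch of the arithmetic such an SDN controller would perform, here is the reservation-duration calculation for a single transfer (illustrative numbers, not a real controller API):

    # Duration of a bandwidth reservation for one dataset.
    dataset_bytes = 50e12          # 50 TB to move
    reserved_bps = 10e9            # 10 Gbit/s path reserved end-to-end
    efficiency = 0.9               # protocol overhead, imperfect utilization

    duration_s = dataset_bytes * 8 / (reserved_bps * efficiency)
    print("reserve the path for ~%.1f hours" % (duration_s / 3600))  # ~12.3 hours

The controller would repeat this per dataset and per candidate route, then pick routes and start times so that the reservations fit the network.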
