Ready-to-deploy deep learning solutions

July 23, 2018

Accelerate your deep learning project deployments with Radeon Instinct™-powered solutions

Deep learning adoption is lagging as companies struggle with how to make it work. Now a new ecosystem is rising to deliver the integrated pieces that ultimately will be part of one turnkey system for deep learning.

Automation has proved its worth in meeting IT and business objectives. Even so, efficiencies in automation and work augmentation software can be greatly enhanced with deep learning. Yet deep learning adoption rates are low. That’s in part because the tech is difficult, and the talent pool is thin. The good news is that an ecosystem is forming and already beginning to resolve some of these issues as it continues to grow towards becoming a single turnkey system.

Why it takes an ecosystem

A Deloitte report found that fewer than 10% of the companies surveyed across 17 countries had invested in machine learning. The chief reasons for the adoption gap are a lack of understanding of how to use the technology, insufficient data to train it with, and a shortage of the talent needed to make it all work. Put in the simplest terms, deep learning is perceived by some to be too hard to deploy for practical use.

The solution for that dilemma is what it has always been for any new technology requiring esoteric skill sets and faced with a talent shortage – build an easy-to-use, turnkey system. That is, of course, easier said than done.

“The ongoing digital revolution, which has been reducing frictional, transactional costs for years, has accelerated recently with tremendous increases in electronic data, the ubiquity of mobile interfaces, and the growing power of artificial intelligence (AI),” according to a McKinsey & Company report.

“Together, these forces are reshaping customer expectations and creating the potential for virtually every sector with a distribution component to have its borders redrawn or redefined, at a more rapid pace than we have previously experienced.”

That’s why today’s sophisticated and complex systems are commonly constructed not by a single vendor but by a strong and diverse ecosystem capable of delivering the many moving parts needed to make a single turnkey system, especially when those systems must work equally well for companies across industries and with diverse needs.

As a result, ecosystems are growing at breathtaking speeds. McKinsey & Company analysts predict that new ecosystems are likely to entirely replace many traditional industries by 2025.

Such an ecosystem is forming for machine learning. It’s seeded with four recently launched, ready-to-deploy solutions, which center on AMD’s Radeon Instinct accelerators for machine learning training and its ROCm Open eCosystem, an open source HPC/Hyperscale-class platform for GPU computing. AMD takes open source all the way down to the graphics card level.

Open source is key to successfully wrangling machine learning systems: it leverages the skills and code of entire communities and makes an ecosystem functional across technologies and applications.

The ROCm open ecosystem

This newly forming ecosystem is well suited for beginning or expanding your deep learning efforts, whether you are the IT person looking to get pre-configured deep learning technologies in place or the scientist who just needs access to HPC systems with one of the frameworks loaded. Either way, users can quickly get to work with their data. Developers also have full and open access to the hardware and software, which speeds their work in developing frameworks.

Everything AMD develops for its Radeon Instinct system is open source and available on GitHub. The company also provides Docker containers, available from the ROCm Docker site, for easier installation of ROCm drivers and frameworks. The Caffe and TensorFlow machine learning frameworks are offered now, with more to follow soon.
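
As a quick sanity check once a framework container is up, something like the short snippet below can confirm that the framework sees the accelerators. This is a minimal sketch, assuming the ROCm build of TensorFlow from one of those containers; the device-listing call is standard TensorFlow, not anything ROCm-specific.

    # Minimal sketch: verify that a ROCm build of TensorFlow (for example,
    # one shipped in a ROCm framework container) can see the Radeon Instinct
    # accelerators.
    from tensorflow.python.client import device_lib

    # List every compute device TensorFlow has registered. With the ROCm
    # drivers and runtime loaded, each Radeon Instinct card appears as a
    # "GPU" device.
    for device in device_lib.list_local_devices():
        print(device.device_type, device.name)

If only CPU devices are printed, the drivers or the framework build are the first places to look.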

A deep learning solutions page has gone live, featuring the four systems that serve as the bud of the blooming ecosystem rooted in AMD technologies. The framework Docker containers will be listed there as well.

This budding machine learning ecosystem is already bearing fruit for organizations looking to launch machine learning training and applications with a minimum of technical effort and expertise. It does so by combining the following building blocks (a brief sketch of what that looks like in practice follows the list):

  • Fast and easy server deployments
  • The ROCm Open eCosystem and infrastructure
  • Deep learning framework Docker containers
  • Optimized MIOpen libraries
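
The practical upshot is that framework-level code stays ordinary. The sketch below is illustrative only: a tiny TensorFlow/Keras training loop with nothing ROCm-specific in it. The data, model, and hyperparameters are placeholders; the point is that the same script runs unchanged on a CPU-only machine or on a Radeon Instinct system inside a preloaded framework container.

    import numpy as np
    import tensorflow as tf

    # Synthetic data standing in for a real training set (placeholder only).
    x = np.random.rand(256, 32).astype("float32")
    y = np.random.rand(256, 1).astype("float32")

    # A small fully connected network; nothing here is ROCm- or GPU-specific.
    model = tf.keras.models.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

    # The framework dispatches this work to whatever accelerator its build
    # supports: Radeon Instinct GPUs under ROCm, or the CPU otherwise.
    model.fit(x, y, epochs=2, batch_size=32)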

The four systems at the ecosystem’s center

“Data science is a mix of art and science—and digital grunt work. The reality is that as much as 80 percent of the work on which data scientists spend their time can be fully or partially automated,” according to a Deloitte report.

This newly forming ecosystem is focused on automating much of the machine learning process. While that is complicated to achieve, the end result is far easier for organizations to use.

Deloitte identified five key vectors of progress that should help foster significantly greater adoption of machine learning by making it more accessible. “Three of these advancements—automation, data reduction, and training acceleration—make machine learning easier, cheaper, and/or faster. The others—model interpretability and local machine learning—open up applications in new areas,” according to the Deloitte report.

Four prebuilt systems are shaping this ecosystem early on. Each is provided by an independent partner and built on or for AMD’s Radeon Instinct and ROCm platforms, though each arrives at a different level of integration. While more partners will join the ecosystem over time, these four provide a solid foundation for organizations looking to get started in machine learning now.

1) AMAX is providing systems with preloaded ROCm drivers and a choice of framework, either TensorFlow or Caffe, for machine learning, advanced rendering, and HPC applications.

2) Exxact is similarly providing multi-GPU Radeon Instinct-based systems with preloaded ROCm drivers and frameworks for deep learning and HPC-class deployments, where performance per watt is important.

3) Inventec provides optimized high-performance systems designed with AMD EPYC™ processors and Radeon Instinct compute technologies, capable of delivering up to 100 teraflops of FP16 compute performance for deep learning and HPC workloads.

4) Supermicro is providing SuperServers supporting Radeon Instinct machine learning accelerators for AI, big data analytics, HPC, and business intelligence applications.

The payoff from leveraging the technologies in a machine learning ecosystem potentially comes in many forms.

“A growing number of tools and techniques for data science automation, some offered by established companies and others by venture-backed start-ups, can help reduce the time required to execute a machine learning proof of concept from months to days. And, automating data science means augmenting data scientists’ productivity in the face of severe talent shortages,” say the Deloitte researchers.

 
