Micron Exposes the Double Life of Memory with Automata Processor

By Nicole Hemsoth

November 22, 2013

If we had to pick the most compelling announcement from SC13, the news from memory vendor (although that narrow distinction may soon change) Micron about its new Automata processor would top the list. There is still enough theory at this point to file this under a technology to watch, but the concept is unique in what it promises, both to Micron’s future and to the accelerator/CPU space for some key HPC-oriented workloads.

In a nutshell, the Automata processor is a programmable silicon device built to handle high-speed search and analysis across massive, complex, unstructured data. As an alternate processing engine for targeted areas, it taps the parallelism inherent in memory to provide a robust and, if early benchmarks are to be believed, remarkable option for certain types of processing.


For starters, here’s what not to expect from Micron’s foray into the processor jungle. This is not something that will snap in to replace CPUs. Despite what some of the recent press elsewhere has suggested, these are far less pure CPU competitors (at least at this point) than specialty accelerators (think FPGAs versus Xeons, for example). They have been designed for a specific set of workloads, including network security, image processing, bioinformatics and select codes that propel the work of our three-letter overlords. The benefit is that they are programmable, and in some ways reconfigurable, and can chew on large-scale unstructured data analytics problems that conventional fixed-word-width processors can’t always handle well.

Paul Dlugosch is director of Automata Processor Development in the Architecture Development Group of Micron’s DRAM division. “One thing people don’t understand well, aside from those memory researchers or people in this industry, is that any memory device is by nature a very highly parallel device,” he said. “In fact, most of the power of that parallelism is left on the table and unused.”

He said that Micron has been stealthily developing its Automata technology for seven years, a process fed by a fundamental change in how the company was thinking about memory’s role in large-scale systems. As Dlugosch told us, Micron has been instrumental in rethinking memory with the Hybrid Memory Cube, but the memory wall needed some new ladders, the first rungs of which were the realization that memory could be doing double duty, so to speak.

At the beginning of the journey into automata territory, he said, there were fundamental questions about what caused the saturation of the memory interface and whether simply increasing bandwidth was the right approach. From there the team started to think beyond the constraints of modern architectures and back to how memory evolved in the first place.

Among the central questions was whether memory could be used as something other than a storage device. Further, the team set about investigating whether multicore concepts offered the shortest inroads to a high degree of parallelism, and whether software composed of sequential instructions issued to an execution pipeline was a necessary component of systems or whether there was a better way.

What’s most interesting about these lines of questioning is that the team started to realize the memory wall might not have been erected by a lack of memory bandwidth at all; rather, it was the symptom of a more profound root cause found elsewhere. That hidden weak point, said Dlugosch, is overall processor inefficiency. “What’s different about the Automata processor is that rather than just trying to devise a means to transfer more information across a physical memory interface, we instead started asking why the mere need for high bandwidth is present.”

[Slide: Micron Automata processor specifications]

The specs you see there are a bit difficult to make sense of, since semiconductors aren’t often measured this way. Placing a value on how many path decisions per second a device can make while working on graph problems or executing non-deterministic finite automata is a bit esoteric, but consider that a single Automata processor has this capacity on its own. And you’re not limited to one, since this is a scalable mechanism; the Automata director tells us that scaling is, in theory, as simple as adding more memory. In other words, eight Automata processors can sit on a memory module, that module plugs into a DIMM slot, and because a system can hold more than one, this processing power scales just like memory capacity.
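The parallelism being described is easiest to see in software. A conventional CPU simulates a non-deterministic finite automaton by stepping every currently active state on each input symbol, one at a time; the claim for the Automata fabric is that the memory array performs that step for all states at once. A minimal Python sketch of the stepping itself (the toy pattern and transition table are purely illustrative, not Micron’s format):

```python
# Minimal NFA simulation: on each input symbol, every active state
# advances "in parallel" (here, a Python loop; in the Automata fabric,
# one hardware step across the whole memory array).

def run_nfa(transitions, start, accepting, text):
    """transitions: {(state, symbol): {next_states}}.
    Returns the end positions of every match found in text."""
    active = {start}
    matches = []
    for i, ch in enumerate(text):
        nxt = set()
        for state in active:
            nxt |= transitions.get((state, ch), set())
        nxt.add(start)          # keep scanning for new match starts
        active = nxt
        if active & accepting:
            matches.append(i)   # record end position of a match
    return matches

# Toy automaton recognizing the motif "acg" in a symbol stream.
t = {
    (0, "a"): {1},
    (1, "c"): {2},
    (2, "g"): {3},
}
print(run_nfa(t, 0, {3}, "ttacgacg"))  # -> [4, 7]
```

Note that the inner loop’s cost grows with the number of active states, which is exactly the work a memory-based fabric amortizes to a constant per input symbol.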

What one can expect on the actual “real” use front is a fully developed SDK that will let end users compile automata and load them into the processor fabric, executing as many automata in parallel against large datasets as they can fit into one or more Automata processors. The idea here is that users will develop their own machines.

As one might imagine, however, the programming environment presents some significant challenges, and Micron is tapping some of its early partners to make inroads here. The base low-level underpinnings are, as Dlugosch admitted, “not as expressive as we’d like it to be to get the full power from this chip,” but the company is working on that via its own ANML (Automata Network Markup Language), which lets users construct automata machines programmatically or, in something closer to full-custom design, through a visual workbench Micron supports. “You can think of it like circuit design for the big data analytics machines that users want to deploy in the fabric,” he said.
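To give a feel for what “constructing automata machines programmatically” means, here is a hedged conceptual sketch in Python; it is not ANML (which is an XML dialect) and uses no Micron tooling, but it shows the kind of compilation step such a toolchain performs: turning a list of search patterns into one automaton’s state machine before it is loaded into a fabric.

```python
# Conceptual sketch only -- not ANML. Compiling a keyword list into a
# single NFA transition table, the way an automata toolchain wires
# state elements together before loading them into hardware.

def build_keyword_automaton(keywords):
    """Compile keywords into one NFA.

    Returns (transitions, accepting): state 0 is the shared start
    state, and each keyword contributes its own chain of states
    ending in a distinct accepting state.
    """
    transitions = {}
    accepting = set()
    next_state = 1
    for word in keywords:
        state = 0
        for ch in word:
            transitions.setdefault((state, ch), set()).add(next_state)
            state = next_state
            next_state += 1
        accepting.add(state)
    return transitions, accepting

t, acc = build_keyword_automaton(["gattaca", "acgt"])
print(len(acc))  # one accepting state per keyword -> 2
```

All keywords share the start state, so a matching engine can hunt for every pattern simultaneously in a single pass over the data, which is the workload profile the article describes.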

Outside of the technology itself, one should note that Micron is leveraging an existing process and facility to manufacture this processor. In other words, despite the long R&D cycle behind it, the overhead for production looks to be relatively minimal.

Automata processing is a fringe concept, and obscure enough that Micron weighed carefully whether to put it in the product’s name. “A lot of people aren’t familiar with automata,” said Dlugosch. “We thought about this a great deal before we decided to call this an automata processor—even though automata are implemented as conventional algorithms in a variety of ways in a variety of applications. They’re not always recognized as automata, but in the areas and end use cases we’re targeting they are and will be used, and the concept of automata computing will become more common, starting in the HPC space first.”

Even if many aren’t immediately familiar with automata, it’s Micron’s hope that its processor will drive recognition of this processor type into the mainstream—and hopefully directly into the laps of big government, life sciences and other companies in need of high performance large-scale data processing.
