Today’s enterprises face an ever-growing volume of data that holds immense value and intelligence for those who can properly collect, process, and store it. There is an insatiable desire to leverage this wealth of information to improve operational processes, deepen customer understanding, and sharpen competitiveness. However, modern computing systems are hitting a wall in their ability to process massive amounts of data quickly and efficiently. New approaches to system design, such as memory-driven computing, will be a key part of the technological transformation required to unleash the next generation of powerful computing systems.
Many industry estimates predict that the digital universe will at least double in size every year through 2020, and there is no question that these volumes of information contain a wealth of actionable insights waiting to be uncovered through proper analysis. Data is already driving breakthroughs in areas such as precision medicine, scientific research, national security, and industrial automation.
High-performance computing (HPC) allows us to quickly process and extract insight from large volumes of information. These powerful systems help convert data into answers to some of the greatest challenges facing the world today, and they support vital research through computer modeling, simulation, and analysis. For decades, computers have been built on the same set of fundamental principles; as the industry works toward the next generation of computing systems, however, it is encountering a new set of formidable technical challenges.
The challenge now is that as data rapidly expands, our ambitions for using it are growing faster than our computing capabilities. Unleashing the next generation of HPC will require an entirely new approach to computing, designed specifically for the big data era, along with a fundamental rethinking of how modern computers are designed and built. Hewlett Packard Enterprise (HPE) has been redefining these principles for years: the company began an extensive and ambitious research project aimed at building new prototypes that put memory, not processing, at the core of the system.
Today’s processors compute on data stored in relatively small amounts of memory tethered to each processor. When a processor needs more data, it must retrieve it from a larger, slower storage system, which creates significant inefficiency and latency, particularly when working with large datasets. HPE’s answer is an approach called Memory-Driven Computing, in which every processor in a system has access to a giant shared pool of memory. This approach untethers memory from each processor and instead places a nearly limitless pool of memory at the core of the system.
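As an illustrative sketch only (not HPE’s actual architecture), a toy Python model can show why repeatedly staging data through small, processor-local memory is costly compared with addressing one large pool directly. The chunk size, dataset, and fetch counting below are all invented for this example:

```python
# Toy model: contrast a processor-centric layout, where data must be
# staged from slow storage into small local memory in chunks, with a
# memory-driven layout, where the whole dataset is directly addressable.

CHUNK = 4                    # hypothetical local-memory capacity (items)
DATASET = list(range(100))   # stand-in for a large dataset

def processor_centric_sum(data, local_capacity=CHUNK):
    """Sum the data, fetching one small chunk at a time from 'storage'."""
    total, fetches = 0, 0
    for start in range(0, len(data), local_capacity):
        local_memory = data[start:start + local_capacity]  # slow fetch
        fetches += 1
        total += sum(local_memory)
    return total, fetches

def memory_driven_sum(data):
    """Sum the data as if it all sat in one addressable memory pool."""
    return sum(data), 0  # no staging fetches needed

# processor_centric_sum(DATASET) performs 25 chunk fetches for this
# dataset; memory_driven_sum(DATASET) performs none. Both return the
# same answer, but the data movement differs dramatically.
```

The point of the sketch is that both models compute the same result; what changes is the amount of data movement, which is exactly where the latency and inefficiency described above accumulate.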
Memory-driven computing will fundamentally change the way the industry designs and builds supercomputers, which will undoubtedly disrupt a variety of industries. Here are just a few examples:
- Healthcare – Memory-driven computing will help doctors and researchers synthesize a dizzying amount of patient information – such as medical records, family health history, lifestyle data, environmental factors, and genomic data – to arrive at useful clinical insights, learn more about the origins of a particular disease, and develop personalized treatment plans.
- Retail – Greater speed and efficiency will enable retailers to pinpoint a shopper’s needs, tastes, and budget more accurately, and deliver actionable sales insights quickly enough to capture attention and engage shoppers.
- Transportation – The ability to analyze larger datasets will allow transportation systems to predict bottlenecks sooner and initiate responses that alleviate delays before they compound.
Memory-driven computing eliminates the inefficiencies of today’s memory, storage, and processing systems to reduce the time needed for complex calculations, enable larger simulations, and deliver a quantum leap in performance and efficiency. Realizing these gains will be critical if the industry is to reach exascale computing by 2023. HPE continues to push the limits of computing alongside its network of partners, such as Intel, who are also hard at work on memory-centric technologies aimed at the shortcomings and inefficiencies of today’s computing and memory systems.
Memory-driven computing is set to have a huge effect on not just the next level of computing, but on everyday life as well. To learn more about the game-changing possibilities of memory-driven computing, I invite you to follow me on Twitter at @seidleHPC. You can also check out @HPE_HPC for all the latest news and updates for the HPC industry.