IntelliProp Unveils Its Omega Memory Fabric Chips

September 22, 2022

LONGMONT, Colo., Sept. 22, 2022 – IntelliProp, a leading innovator of composable data center transformation technology, has announced its intent to deliver its Omega Memory Fabric chips. The chips incorporate the Compute Express Link (CXL) Standard, along with IntelliProp’s innovative Fabric Management Software and Network Attached Memory (NAM) system. In addition, the company announced the availability of three field-programmable gate array (FPGA) solutions built with its Omega Memory Fabric.

The Omega Memory Fabric eliminates memory bottlenecks and allows memory to be dynamically allocated and shared across compute domains both inside and outside the server, an industry first that delivers on the promise of Composable Disaggregated Infrastructure (CDI) and rack-scale architecture. IntelliProp’s memory-agnostic innovation will drive the adoption of composable memory and transform data center energy use, performance, efficiency and cost.

As data continues to grow, database and AI applications are increasingly constrained by memory bandwidth and capacity. At the same time, billions of dollars are being wasted on stranded and underutilized memory. According to a recent Carnegie Mellon / Microsoft report, Google stated that average DRAM utilization in its data centers is 40%, and Microsoft Azure said that 25% of its server DRAM is stranded.

“IntelliProp’s efforts in extending CXL connection beyond simple memory expansion demonstrate what is achievable in scaled-out, composable data center resources,” said Jim Pappas, Chairman of the CXL Consortium. “Their advancements on both CXL and Gen-Z hardware and management software components have strengthened the CXL ecosystem.”

Experts agree that memory disaggregation increases memory utilization and reduces stranded or underutilized memory. Today’s remote direct memory access (RDMA)-based disaggregation has too much overhead for most workloads, and virtualization solutions are unable to provide transparent latency management. The CXL standard offers low-overhead memory disaggregation and provides a platform to manage latency.
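
The practical difference is that CXL-attached memory sits in the host’s physical address space and is reached with ordinary load and store instructions, not with explicit RDMA transfers managed by software. Below is a minimal sketch of that access model, assuming the fabric-attached memory shows up to Linux as a device-DAX node; the /dev/dax0.0 path and the 2 MiB mapping size are illustrative assumptions, not IntelliProp specifics.

    /* Sketch: load/store access to fabric-attached memory exposed as a
     * Linux device-DAX node. Device path and size are assumptions. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const size_t len = 2UL * 1024 * 1024;     /* one DAX-aligned 2 MiB region */
        int fd = open("/dev/dax0.0", O_RDWR);     /* hypothetical CXL memory device */
        if (fd < 0) { perror("open"); return 1; }

        uint64_t *mem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED) { perror("mmap"); return 1; }

        /* No RDMA verbs or explicit copies: the CPU simply loads and stores. */
        mem[0] = 0xC0FFEEULL;
        printf("read back: 0x%llx\n", (unsigned long long)mem[0]);

        munmap(mem, len);
        close(fd);
        return 0;
    }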

“History tends to repeat itself. NAS and SAN evolved to solve the problems of storage over- and underutilization, performance bottlenecks and stranded storage. The same issues are occurring with memory,” stated John Spiers, CEO, IntelliProp. “Our trailblazing approach to CXL technology unlocks memory bottlenecks and enables next-generation performance, scale and efficiency for database and AI applications. For the first time, high-bandwidth, petabyte-level memory can be deployed for vast in-memory datasets, minimizing data movement, speeding computation and greatly improving utilization. We firmly believe IntelliProp’s technology will drive disruption and transformation in the data center, and we intend to lead the adoption of composable memory.”

Omega Memory Fabric/NAM System, Powered by IntelliProp’s ASIC

IntelliProp’s Omega Memory Fabric and Management Software enable enterprise composability of memory and CXL devices, including storage. Powered by IntelliProp’s ASIC, the Omega Memory Fabric-based NAM System and software expand the connection and sharing of memory inside and outside the server, placing memory pools where they are needed. The Omega NAM is well suited for AI, ML, big data, HPC, cloud and hyperscale/enterprise data center environments, specifically targeting applications that require large amounts of memory.

“In a survey IDC completed in early 2022, almost half of enterprise respondents indicated that they anticipate memory-bound limitations for key enterprise applications over time,” said Eric Burgener, research vice president, Infrastructure Systems, Platforms and Technologies Group, IDC. “New memory pooling technologies like what IntelliProp is offering with their NAM system will help to address this concern, enabling dynamic allocation and sharing of memory across servers with high performance and without hardware slot limitations. The composable disaggregated infrastructure market that IntelliProp is playing in is an exciting new market that is expected to grow at a 28.2 percent five-year compound annual growth rate to crest at $4.8 billion by 2025.”

With IntelliProp’s Omega Memory Fabric and Management Software, hyperscale and enterprise customers will be able to take advantage of multiple tiers of memory with predetermined latency. The system will enable large memory pools to be placed where needed, allowing multiple servers to access the same dataset. It also allows new resources to be added with a simple hot plug, eliminating server downtime and rebooting for upgrades.

“IntelliProp is on to something big. CXL disaggregation is key, as half of the cost of a server is memory. With CXL disaggregation, they are taking memory sharing to a whole new level,” said Marc Staimer, Dragon Slayer analyst. “IntelliProp’s technology makes large pools of memory shareable between external systems. That has immense potential to boost data center performance and efficiency while reducing overall system costs.”

Omega Memory Fabric Features, Incorporating the CXL Standard

  • Scale and share memory outside the server
  • Dynamic multi-pathing and allocation of memory
  • End-to-end (E2E) security using AES-XTS 256 with added integrity protection
  • Supports non-tree topologies for peer-to-peer connectivity
  • Direct path from GPU to memory
  • Management scaling for large deployments using multiple fabrics/subnets and distributed managers
  • Direct memory access (DMA) allows efficient data movement between memory tiers without tying up CPU cores (see the sketch after this list)
  • Memory agnostic and up to 10x faster than RDMA
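
To the host operating system, a CXL memory pool typically appears as an additional, CPU-less NUMA node, so tier-to-tier movement can be pictured with standard Linux NUMA interfaces. The sketch below is illustrative only and is not IntelliProp’s implementation: it asks the kernel to migrate a buffer’s pages from local DRAM to an assumed CXL-backed node 1 via move_pages(2); in the Omega fabric the equivalent movement would be handled by DMA engines rather than CPU cores.

    /* Sketch: migrating pages between memory tiers with move_pages(2)
     * from libnuma (link with -lnuma). Node numbers are assumptions. */
    #include <numaif.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define NPAGES 16

    int main(void) {
        long page = sysconf(_SC_PAGESIZE);
        char *buf = aligned_alloc(page, NPAGES * page);
        if (!buf) return 1;

        void *pages[NPAGES];
        int   nodes[NPAGES];
        int   status[NPAGES];
        for (int i = 0; i < NPAGES; i++) {
            buf[i * page] = 1;        /* touch so the pages are really allocated */
            pages[i] = buf + i * page;
            nodes[i] = 1;             /* assumed CXL-backed "slow" tier */
        }

        /* Request migration of this process's pages to the slow tier. */
        if (move_pages(0, NPAGES, pages, nodes, status, MPOL_MF_MOVE) != 0)
            perror("move_pages");
        else
            printf("page 0 now resides on node %d\n", status[0]);

        free(buf);
        return 0;
    }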

“AI is one of the world’s most demanding applications in terms of compute and storage. The prospect of using ML in genomics, for example, requires exascale compute and low-latency access to petabytes of storage. The ability to dynamically allocate shareable pools of memory over the network and across compute domains is a feature we are very excited about,” said Nate Hayes, Co-Founder and Board Member at RISC AI. “We think the fabric from IntelliProp provides the latency, scale and composable disaggregated infrastructure for the next-generation AI training platform we are developing at RISC AI, and this is why we are planning to integrate IntelliProp’s technology into the high-performance RISC-V processors that we will be manufacturing.”

Omega Memory Fabric Solutions Bring Future CXL Advantages to Data Centers

IntelliProp unveiled three FPGA solutions as part of its Omega Fabric product suite. The solutions connect CXL devices to CXL hosts, allowing data centers to increase performance, scale across dozens to thousands of host nodes, consume less energy because data travels with fewer hops, and mix shared DRAM (fast memory) with shared SCM (slow memory) for a lower total cost of ownership (TCO).

Omega Memory Fabric Solutions

  • Omega Adapter
    • Enables the pooling and sharing of memory across servers
    • Connects to the IntelliProp NAM array
  • Omega Switch
    • Enables the connection of multiple NAM arrays to multiple servers through a switch
    • Targeted for large deployments of servers and memory pools
  • Omega Fabric Manager (open source)
    • Enables key fabric management capabilities:
      • End-to-end encryption over CXL, with data integrity, to prevent applications from seeing the contents of other applications’ memory
      • Dynamic multi-pathing with automatic failover for redundancy when links go down
      • Support for non-tree topologies, enabling peer-to-peer uses such as GPU-to-GPU computing and a direct GPU path to memory
      • Direct memory access (DMA) for data movement between memory tiers without using the CPU

About IntelliProp

IntelliProp is a Colorado-based company founded in 1999 to provide ASIC design and verification services for the data storage and memory industry. Today, IntelliProp is leading the composable data center transformation, fundamentally changing the performance, efficiency and cost of data centers. IntelliProp continues to gain recognition as a leading expert in the data storage industry and actively participates in standards groups developing next-generation memory infrastructure.


Source: IntelliProp
