Numascale Delivers Shared Memory Systems at Cluster Price with Virtually Unlimited Number of Cores and Memory

By Nicole Hemsoth

August 19, 2013

HPC Architectures

Current computer architectures have developed along two branches: one with distributed memory, where each node has its own address domain and programs use a message-passing model, and one with globally shared memory, where a single physical address domain spans the whole system. The first category covers massively parallel processors (MPPs) and clusters; the second covers common servers, workstations, personal computers and symmetric multiprocessing (SMP) systems through multicore and multi-socket implementations. These two architectures represent distinctly different programming paradigms. The first (MPP) requires programs written explicitly for message passing between processes, where each process has access only to its local data. The second (SMP) can be programmed with multithreading techniques, with global access to all data from all processes and processors. The latter is a simpler model that requires less code, and it is the same architecture and programming model that every programmer uses daily on ordinary workstations and personal computers.
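The contrast is easy to see in code. Below is a minimal sketch of the shared-memory style using OpenMP (an assumption; the article names no particular threading API): every thread reads and writes the same array through plain loads and stores in a single address space. The message-passing counterpart appears after Figure 1.

```c
/* Shared-memory (SMP) style: one address space, many threads.
 * Build with: gcc -fopenmp smp_sum.c */
#include <stdio.h>

#define N 1000000
static double a[N];

int main(void) {
    double sum = 0.0;
    /* Each thread works on a slice of the same array; no copies,
     * no explicit communication between threads. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = (double)i;   /* any thread can touch any element directly */
        sum += a[i];
    }
    printf("sum = %g\n", sum);
    return 0;
}
```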

Since clusters are composed of general-purpose multicore/multi-socket processing nodes, they represent a hybrid of the two architectures described above.

Numascale’s approach to scalable shared memory

Numascale’s NumaConnect extends the SMP programming model so that it scales up by connecting a large number of standard servers (up to 4,096 in the current implementation) into one global shared memory (GSM) system. Such a system provides the same easy-to-use environment as a common workstation, but with the added capacity of a very large shared physical address space and I/O, all controlled by a single-image operating system. Programmers get the same working environment as on their favorite workstation, and system administrators have only one system to manage instead of the many individual nodes of a cluster. In addition, the SMP model allows efficient execution of message-passing (MPI) programs by using shared memory as the communication channel between processes.

Distributed vs shared memory

In distributed memory systems (clusters and MPPs), processors residing on different nodes have no direct access to each other’s memories (or I/O space). Data on another node cannot be referenced directly by the programmer through a variable name, as it can in a shared memory architecture. Data to be shared or communicated between processes must instead be moved through explicit programming, by sending it over a network. This is normally done through calls to a message-passing library (such as MPI) that invokes a software driver to perform the transfer. The data to be sent was most probably produced by the sending process, so it resides in one of the caches belonging to the processor running that process; this is the normal case, since most MPI programs communicate through relatively short messages of a few bytes each. The communication library must copy the data to a system send buffer and call a routine to set up a DMA transfer by the network adapter, which in turn requests the data from memory and transfers it to a buffer on the receiving node. All in all, this requires a number of transactions across the system’s datapaths, as depicted in Figure 1 (a minimal MPI sketch follows the figure).

Figure 1: Message passing with traditional network technology, showing sending side only
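As a concrete illustration, here is a minimal MPI point-to-point exchange (a sketch; the ranks, tag and buffer contents are arbitrary). Each call below hides the library copies and DMA setup described above.

```c
/* Minimal MPI send/receive pair.
 * Build with: mpicc mpi_send.c   Run with: mpirun -np 2 ./a.out */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank;
    double payload[2] = {3.14, 2.72};  /* small message, typical of MPI codes */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Behind this call: copy into a system send buffer, DMA setup,
         * and transfer across the network to the receiving node. */
        MPI_Send(payload, 2, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(payload, 2, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %g %g\n", payload[0], payload[1]);
    }
    MPI_Finalize();
    return 0;
}
```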

In a shared memory machine, referencing any variable anywhere in the entire dataset is accomplished through a single standard load instruction. For the programmer, this is remarkably simple compared with writing the explicit MPI calls needed to perform the same task.
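In code, the difference is a one-liner (a sketch; the globally shared array `dataset` is hypothetical):

```c
/* In a global shared memory system, remote data is just another address:
 * a plain array access compiles to an ordinary load instruction, even if
 * the element physically resides in another node's memory. */
extern double dataset[];         /* hypothetical globally shared array */

double read_anywhere(long i) {
    return dataset[i];           /* single load; no library call, no copy */
}
```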

The same send operation, when a message-passing (MPI) program runs on a shared memory system, requires only that the sender execute a single store instruction (preferably a non-polluting store, to avoid local cache pollution) to send up to 16 bytes, the maximum amount of data a single x86 store instruction can write today. The data is sent to an address that points to the right location in the memory of the remote node, as indicated in Figure 2 (a sketch of such a store follows the figure).

Figure 2: Message passing with shared memory, both sender and receiver shown
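For illustration, here is a minimal sketch of that 16-byte, cache-bypassing store using SSE2 intrinsics (an assumption; the article does not show an exact instruction sequence, and `remote_buf` is a hypothetical pointer mapped to the receiving node's memory):

```c
#include <emmintrin.h>   /* SSE2: _mm_stream_si128 (MOVNTDQ) */
#include <stdint.h>

/* Write 16 bytes to an address that maps to the remote node's memory,
 * bypassing the local caches (a "non-polluting" store).
 * remote_buf must be 16-byte aligned. */
void send16(void *remote_buf, const uint8_t msg[16]) {
    __m128i v = _mm_loadu_si128((const __m128i *)msg);
    _mm_stream_si128((__m128i *)remote_buf, v);  /* non-temporal store */
    _mm_sfence();  /* make the non-temporal store globally visible */
}
```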

Numascale’s technology applies to applications whose memory and processor requirements exceed what a single commodity server can provide. Applications that can benefit from NumaConnect range from HPC codes needing 10-20 TBytes of main memory for seismic data processing with advanced algorithms, through applications in the life sciences, to Big Data analytics.

Deployment

NumaConnect systems are available from system integrators worldwide, based on the IBM x3755 server and Supermicro 1042 or 2042 servers. Numascale operates a demo system where potential customers can run their tests. See the Numascale website (http://numascale.com) for details; the request form for access to the demo system is at http://numascale.com/numa_access.php.
