November 16, 2011
Patented HyperCloud and Planar-X technologies enable 768GB of DRAM server memory
IRVINE, Calif., Nov. 14 -- Netlist, Inc. (NASDAQ: NLST), a designer and manufacturer of high-performance memory subsystems, today announced the industry's first 32GB Virtual Dual Rank (2vR) RDIMM (registered dual inline memory module). The new HyperCloud module enables an unprecedented 768GB of RDIMM memory capacity in next-generation two-processor servers.
While the industry at large waits several years for 8Gb monolithic DRAM to deliver this level of density, Netlist is delivering the first 32GB 2vR RDIMM today using currently available 4Gb monolithic DRAM. At the center of this feat is HyperCloud's patented rank multiplication technology, which replicates the functionality of a yet-unavailable 8Gb DRAM with two 4Gb DRAMs. Netlist's innovative Planar-X technology further enables cost-effective packaging of 72 4Gb components into a standard RDIMM form factor, producing the lowest-cost, highest-density memory module on the market. HyperCloud modules are JEDEC compatible and plug into standard server memory slots.
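The capacity arithmetic behind these figures can be checked on the back of an envelope. Below is a minimal sketch, assuming x4-organized 4Gb devices, 18 devices per 72-bit ECC rank (16 data plus 2 ECC) and a 24-slot two-processor server; these configuration details are assumptions, not specifics from the release.

```python
# Rough capacity check for the 32GB 2vR HyperCloud RDIMM.
# Assumptions (not stated in the release): x4-organized 4Gb DRAMs,
# 18 devices per 72-bit ECC rank, and 24 DIMM slots per server.

DEVICE_GBIT = 4                          # density of one monolithic DRAM, in gigabits
DEVICES_PER_RANK = 18                    # 16 data + 2 ECC devices on a 72-bit bus
DATA_DEVICES_PER_RANK = 16               # only data devices count toward capacity
PHYSICAL_RANKS = 72 // DEVICES_PER_RANK  # 72 devices on the module -> 4 ranks

gb_per_rank = DATA_DEVICES_PER_RANK * DEVICE_GBIT / 8  # 64Gb -> 8GB per rank
module_gb = PHYSICAL_RANKS * gb_per_rank               # 4 ranks x 8GB = 32GB

# Rank multiplication presents the 4 physical ranks to the memory
# controller as 2 virtual ranks -- hence "Virtual Dual Rank" (2vR).
virtual_ranks = PHYSICAL_RANKS // 2

server_gb = 24 * module_gb  # 24 slots x 32GB = 768GB per two-processor server

print(f"{module_gb:.0f}GB module, {virtual_ranks} virtual ranks, "
      f"{server_gb:.0f}GB per server")
```

Running the sketch prints "32GB module, 2 virtual ranks, 768GB per server," consistent with the figures in the announcement.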
"By delivering the industry's first 32GB 2vR RDIMM, we are able to offer our customers unprecedented memory capacity with efficient economics associated with widely available DRAM components," said Steve McClure, vice president of worldwide sales and marketing, Netlist. "The processing of large data sets at high CPU speeds requires large amounts of dynamic memory to avoid costly disk drive wait times. With the new 32GB HyperCloud, our OEM customers can maximize server utilization and application performance which translate into differentiated value for their products."
"Customers are looking for greater memory capacity and bandwidth for enhanced application performance," said Mike Gill, vice president of engineering, Industry Standard Servers and Software, HP. "HP ProLiant servers that use Netlist's HyperCloud memory products help customers achieve improved capacity and memory bandwidth needed for cloud computing, analytics, virtualization and high performance computing applications."
Netlist is showcasing its 32GB Virtual Dual Rank HyperCloud Planar-X RDIMM at SC11 in booth 2938. SC11, the International Conference for High Performance Computing, Networking, Storage and Analysis, takes place in Seattle, November 12-18; for more information, see the SC11 website: www.sc11.supercomputing.org.
Additional information on Netlist's 32GB Virtual Dual Rank HyperCloud Planar-X RDIMM can be found at www.netlist.com/hypercloud.
Netlist, Inc. designs and manufactures high-performance, logic-based memory subsystems for server and storage applications in cloud computing. Netlist's flagship products include HyperCloud, a patented memory technology that breaks traditional memory barriers; the NVvault family of products, which enables data retention during power interruption; EXPRESSvault, a PCI Express backup/recovery solution for cache data protection; and a robust portfolio of high-performance and specialty memory subsystems, including HyperStream, VLP (very low profile) DIMMs and Planar-X RDIMMs.
Netlist develops technology solutions for customer applications in which high speed, high capacity, small form factor and heat dissipation are key requirements for system memory. These customers include OEMs that design and build tower servers, rack-mounted servers, blade servers, high-performance computing clusters, engineering workstations and telecommunications equipment. Founded in 2000, Netlist is headquartered in Irvine, CA, with manufacturing facilities in Suzhou, People's Republic of China. Learn more at www.netlist.com.