December 21, 2007
The LS-1 for secure environments allows users to share compute power across HPC projects requiring high levels of data security
SALT LAKE CITY, Dec. 17 -- LNXI (formerly Linux Networx, Inc.), an award-winning provider of Linux-based, production-ready High Performance Computing (HPC) solutions, is pleased to announce its LS-1 for Secure Environments (LS-1 SE), a diskless, clustered HPC system optimized for projects in which proper data destruction is essential. LNXI's innovative, elegant solution offers users a way to pool compute power for various secure environment projects without having to destroy entire systems to ensure all project-related data has been eliminated. The LS-1 SE solution makes high performance computing in secure environments simpler, cheaper and faster than ever before. This further enables organizations to reach their mission critical goals faster.
Under the old approach to HPC projects in secure environments, systems comprised racks that housed the compute nodes, accessible non-volatile storage disks and the master node together. This meant that, in order to repurpose a system after the completion of a secure project, an organization had to either verifiably prove that all data had been completely destroyed on the system's non-volatile storage and memory -- or destroy the entire system, at a potential cost of millions of dollars.
The LS-1 SE, however, solves this problem by physically separating the compute nodes from the system storage and the non-compute nodes that comprise the system intelligence. This design ensures that no user-accessible, non-volatile memory resides on the compute nodes. While no hard drives reside on the Diskless Compute Nodes, there are components of the system where data could theoretically be written. Therefore, LNXI offers a "scrubbing" solution that destroys any residual data that could still reside on BIOS chips and other memory components.
By isolating compute nodes from user-accessible, non-volatile storage components, the LS-1 SE allows users to repurpose those compute resources after a secure environment project is completed, without having to destroy the entire system to carry out due diligence. The new HPC solution from LNXI offers organizations a way to save up to two-thirds of the cost potentially associated with data destruction.
The LS-1 SE solution also allows different secure environment HPC projects to be co-located, pooling compute power through the compute nodes while simultaneously protecting individual project data from contamination. By making sure no data passes between projects, the LS-1 SE eliminates the need to purchase separate HPC node systems in order to guarantee data security -- again adding value to existing systems and saving production costs.
"The old way of conducting HPC projects in secure environments was something akin to throwing out your car once you've completed your drive," said Jack Kenney, CEO of Linux Networx. "LNXI's new LS-1 SE solution allows users the ability to stretch their HPC resources, thereby saving dollars that could be thrown back into product development and profit. The solution to secure environment data destruction offered by the LS-1 SE now allows you to simply change your tires and continue on to another journey, instead of throwing out the whole car."
All LNXI LS-1 HPC solutions are developed through a comprehensive cluster-creation process that ensures not only that systems are up and running the day they arrive on site, but also that they reach their peak performance. Called Validated Performance Engineering, or VPE, this continuous solution development process for the HPC market delivers a stable, productive Linux cluster, installed and benchmarked on site by LNXI professionals.
LNXI provides comprehensive high-performance computing (HPC) solutions that enable companies in the aerospace, automotive, government, heavy manufacturing and oil and gas sectors to reach their business-critical goals faster. The LNXI focus is architecting, delivering and supporting HPC cluster solutions based on the customer's unique business and technical requirements. LNXI employs Validated Performance Engineering to ensure all system components are optimized and fully tested, resulting in a production-ready cluster that is up and running in hours -- not weeks. By accelerating the time to productive use and supporting customers throughout the product lifecycle, LNXI clusters enable greater scientific and engineering productivity and performance, faster innovation, and increased competitive advantage. With a customer base that includes industry leaders Boeing, BMW, DaimlerChrysler AG, Audi, Caterpillar, John Deere, Total, Schlumberger, Shell, Los Alamos National Laboratory, Sandia National Laboratories, and the Departments of Defense, Energy and Agriculture, LNXI is a trusted provider of HPC solutions.