December 21, 2007
The LS-1 for Secure Environments allows users to share compute power across HPC projects requiring high levels of data security
SALT LAKE CITY, Dec. 17 -- LNXI (formerly Linux Networx, Inc.), an award-winning provider of Linux-based, production-ready High Performance Computing (HPC) solutions, is pleased to announce its LS-1 for Secure Environments (LS-1 SE), a diskless, clustered HPC system optimized for projects in which proper data destruction is essential. LNXI's solution offers users a way to pool compute power across secure environment projects without having to destroy entire systems to ensure that all project-related data has been eliminated. The LS-1 SE makes high performance computing in secure environments simpler, cheaper and faster than ever before, further enabling organizations to reach their mission-critical goals faster.
Traditionally, HPC systems for secure environments consisted of racks that housed the compute nodes, user-accessible non-volatile storage disks and the master node together. This meant that, in order to repurpose a system after the completion of a secure project, an organization either had to prove that all data had been completely destroyed on the system's non-volatile storage and memory -- or destroy the entire system, at a potential cost of millions of dollars.
The LS-1 SE, however, solves this problem by physically separating the compute nodes from the system storage and from the non-compute nodes that comprise the system intelligence. This design ensures that no non-volatile storage resides on the user-accessible compute nodes. While no hard drives reside on the Diskless Compute Nodes, there are components of the system where data could theoretically be written. Therefore, LNXI offers a "scrubbing" solution that destroys any data that could still reside on BIOS chips and other RAM components.
By isolating compute nodes from user-accessible, non-volatile storage components, the LS-1 SE allows users to repurpose those compute resources after a secure environment project is completed, without having to destroy the entire system to satisfy due diligence. The new HPC solution from LNXI can save organizations up to two-thirds of the cost associated with data destruction.
The LS-1 SE solution also allows different secure environment HPC projects to be co-located, pooling compute power through the compute nodes while simultaneously protecting individual project data from contamination. By making sure no data passes between projects, the LS-1 SE eliminates the need to purchase separate HPC node systems in order to guarantee data security -- again adding value to existing systems and saving production costs.
"The old way of conducting HPC projects in secure environments was something akin to throwing out your car once you've completed your drive," said Jack Kenney, CEO of Linux Networx. "LNXI's new LS-1 SE solution allows users the ability to stretch their HPC resources, thereby saving dollars that could be thrown back into product development and profit. The solution to secure environment data destruction offered by the LS-1 SE now allows you to simply change your tires and continue on to another journey, instead of throwing out the whole car."
All LNXI LS-1 HPC solutions are developed through a comprehensive cluster-creation process that ensures not only that systems are up and running the day they arrive on site, but also that they reach their peak performance. Called Validated Performance Engineering, or VPE, this continuous solution development process for the HPC market delivers a stable, productive Linux cluster, installed and benchmarked on site by LNXI professionals.
LNXI provides comprehensive high-performance computing (HPC) solutions that enable companies in the aerospace, automotive, government, heavy manufacturing and oil and gas sectors to reach their business-critical goals faster. The LNXI focus is architecting, delivering and supporting HPC cluster solutions based on the customer's unique business and technical requirements. LNXI employs Validated Performance Engineering to ensure all system components are optimized and fully tested, resulting in a production-ready cluster that is up and running in hours -- not weeks. By accelerating the time to productive use and supporting customers throughout the product lifecycle, LNXI clusters enable greater scientific and engineering productivity and performance, faster innovation, and increased competitive advantage. With a customer base that includes industry leaders Boeing, BMW, DaimlerChrysler AG, Audi, Caterpillar, John Deere, Total, Schlumberger, Shell, Los Alamos National Laboratory, Sandia National Laboratories, and the Departments of Defense, Energy and Agriculture, LNXI is a trusted provider of HPC solutions.