November 11, 2010
New offering provides unparalleled security, control, and convenience for users of HPC in the cloud
FREMONT, Calif., Nov. 11 -- Penguin Computing, experts in high performance computing solutions, today announced immediate availability of its new Disk2Server data management solution for Penguin Computing on Demand (POD) customers. Disk2Server lets customers ship disk drives directly to and from a user-managed storage server in the cloud, providing data management and protection capabilities that greatly enhance both HPC cloud workflow and security.
Disk2Server controls how critical data is moved on and off the HPC cloud resource. Using a combined storage/login server, a POD user can manage up to 96 TB of local data. Users' data is stored directly on their storage/login server instead of shared network storage, providing the convenience of immediate access to the data plus the security of data that is completely contained within a user-controlled server.
"Penguin Computing's Disk2Server solution allows us to easily ship terabytes of data from our collection vehicles deployed across the world directly to the POD facility for processing with very minimal downtime," said John Ristevski, founder, CTO and co-chief executive officer of Earthmine.
Physical disks can be shipped to the POD datacenters from anywhere in the world. Once at the datacenter, the disks are installed into the user's server, giving the user immediate access to large volumes of data without the delays of copying data or streaming it over the Internet. Users retain control and convenience while still enjoying the flexibility of an on-demand Linux cluster infrastructure optimized for HPC workloads.
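The release does not describe Disk2Server's actual tooling, but the ship-then-install workflow above implies an integrity check: hash every file before a drive ships, then confirm nothing was corrupted in transit once the drive is installed in the storage/login server. The sketch below is a minimal, hypothetical illustration of that step using SHA-256 manifests; the function names and approach are assumptions for illustration, not part of any Penguin Computing product.

```python
# Hypothetical sketch of the ship-and-verify step in a Disk2Server-style
# workflow: build a checksum manifest before the drive ships, then
# re-verify it after the drive is installed in the cloud-side server.
# (Illustrative only; not Penguin Computing tooling.)
import hashlib
from pathlib import Path


def build_manifest(root: str) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    root_path = Path(root)
    manifest = {}
    for f in sorted(root_path.rglob("*")):
        if f.is_file():
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            manifest[str(f.relative_to(root_path))] = digest
    return manifest


def verify_manifest(root: str, manifest: dict[str, str]) -> list[str]:
    """Return the relative paths whose contents no longer match the manifest."""
    current = build_manifest(root)
    return [path for path, digest in manifest.items()
            if current.get(path) != digest]
```

In practice the sender would store the manifest separately from the drive (for example, transmit it over the network), so that after installation any file damaged or altered in transit is flagged by `verify_manifest`.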
"Penguin continues to respond to the growing needs of our POD customers and our new Disk2Server offering is just the latest example of how Penguin is focusing on enhancing HPC cloud workflow and data security. The ability to move terabytes of data with user-controlled encryption to and from a user-managed storage server gives POD users a new level of convenience and confidence," said Tom Coull, senior vice president and general manager for software and services at Penguin Computing.
For more information about the new Disk2Server product on Penguin Computing's POD, contact Penguin Computing at 1-855-884-8477 or visit http://www.penguincomputing.com/pod.
About Penguin Computing
Penguin Computing is a global leader in high-performance computing (HPC), delivering complete, integrated HPC solutions, from the workstation to the cloud. With a focus on cutting-edge technology, ease-of-use and exceptional customer service, Penguin cost-effectively meets the needs of the world's most demanding HPC users, including Caterpillar, Lockheed Martin, the U.S. Air Force, and the U.S. Navy. Today, Penguin delivers a range of solutions, from massive Linux clusters to Penguin Computing on Demand (POD), a service that provides a complete HPC solution in the cloud. Penguin has been an innovator in HPC solutions for over a decade, and the company's founder Donald Becker is recognized as the "Father of Linux Clustering." For more information about Penguin Computing and Penguin products, go to http://www.penguincomputing.com.
Source: Penguin Computing Inc.