November 12, 2012
FREMONT, Calif. & SALT LAKE CITY, Nov. 12 – Penguin Computing, a specialist in high performance computing (HPC) solutions, today announced the immediate availability of PowerInsight, a new product for monitoring server and desktop power consumption.
PowerInsight was designed by Penguin Computing in close cooperation with Sandia National Laboratories. It is a compact device that measures power consumption for all subsystem components within a server. The product includes an ARM-based server, a custom carrier board, and kits of sensors and connectors for the various power rails within a system, all within a form factor small enough to fit a 3.5" drive tray. Examples of subsystems that can be monitored include the CPU, memory, hard drives, GPU, and fans. Additional subsystem monitoring devices are planned.
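For illustration, the sketch below shows how per-rail readings from such a device might be integrated into per-subsystem energy figures (E ≈ Σ P·Δt). The rail names and the read_rail_power call are hypothetical stand-ins, since the announcement does not document a software interface.

```python
import time

# Hypothetical rail names; the actual rails depend on the instrumented system.
RAILS = ("cpu", "memory", "disk", "gpu", "fan")

def read_rail_power(rail):
    # Stand-in for a real per-rail read from the measurement device, in watts.
    return 0.0

def subsystem_energy(duration_s, dt=0.1):
    """Integrate sampled power per rail: E ~ sum(P * dt), in joules."""
    energy = dict.fromkeys(RAILS, 0.0)
    t_end = time.monotonic() + duration_s
    while time.monotonic() < t_end:
        for rail in RAILS:
            energy[rail] += read_rail_power(rail) * dt
        time.sleep(dt)
    return energy
```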
“In many data centers power is the biggest facility challenge,” says Phil Pokorny, CTO at Penguin Computing. “While tools that measure system-level power consumption are a commodity, affordable, easy-to-use tools for obtaining power information at the subsystem level are virtually non-existent. We are the first company on the market to offer a solution that allows researchers, programmers and system manufacturers to gain more insight into the power consumption of their systems and the implications of their design decisions.”
With its affordability and ease of use, PowerInsight also enables the notion of ‘power-aware programming’: programmers who face a choice of algorithms for solving a specific problem can now use PowerInsight to make choices that account for power at the subsystem level as well as raw benchmark performance.
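As a rough illustration of this idea (the release describes no programming API, so the read_power callable and the candidate workloads here are hypothetical), one might compare two algorithm variants on measured energy as well as runtime:

```python
import threading
import time

def measure(workload, read_power, dt=0.05):
    """Run `workload` while sampling `read_power` (watts) in a background
    thread; return (runtime_s, energy_joules) using E ~ sum(P * dt)."""
    samples = []
    done = threading.Event()

    def sampler():
        while not done.is_set():
            samples.append(read_power())
            time.sleep(dt)

    t = threading.Thread(target=sampler)
    t.start()
    start = time.monotonic()
    workload()
    runtime = time.monotonic() - start
    done.set()
    t.join()
    return runtime, sum(samples) * dt

# A faster algorithm is not always the better choice if it draws
# disproportionately more power; comparing (runtime, energy) pairs
# for each candidate makes the trade-off explicit.
```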
“Our intent is for PowerInsight to further our understanding of energy usage down to the component level,” says James Laros, Principal Member of Technical Staff at Sandia National Laboratories. “Our experiments will try to empirically measure the effect, on both performance and energy, of hardware adjustments like CPU frequency scaling, and of software changes that involve algorithm modifications.”
A demo of PowerInsight will be shown in Penguin Computing’s booth 1217 at SC’12 in Salt Lake City. Please visit www.penguincomputing.com for more information.
About Penguin Computing
For well over a decade, Penguin Computing has been dedicated to delivering complete, integrated Enterprise and High Performance Computing (HPC) solutions that are innovative, cost-effective, and easy to use. Penguin offers a complete end-to-end portfolio of products and solutions, including workstations, rackmount servers, custom server designs, power-efficient rack solutions, and turnkey clusters. Penguin also offers the Scyld suite of software products for efficient provisioning and infrastructure monitoring. Additionally, Penguin Computing on Demand (POD) is a public HPC cloud that is available instantly and as needed. Penguin counts some of the world’s most demanding organizations as its customers, including AOL, Yelp, Caterpillar, Life Technologies, Dolby, Lockheed Martin and the US Air Force.
Source: Penguin Computing