October 02, 2008
Here's a collection of highlights, selected totally subjectively, from this week's HPC news stream as reported at insideHPC.com and HPCwire.
>>10 words and a link
SiCortex announces new deskside, responds to CX1;
NY investment to create High Performance Computation Consortium;
HPC career outlook: share with an undergrad you know;
Congress says DARPA can't "responsibly manage" requested FY09 increase;
Video tour of new Intel datacenter;
TACC Ranger video tour;
MPI standards update first since 1997, more coming;
Star-P coming to Windows;
Craig Mundie outlines the future of computing;
Integrating CUDA and Visual C++;
Sun releases GridEngine beginner's guide;
Stratus markets high availability solution to Win HPC audience;
Russian Top50 HPC list out;
Allinea releases debugging tool for MS Visual Studio;
>>Google datacenters: good, better, best
This just in from Twitter (hi, @datacenter), an article at Data Center Knowledge posted today with details of Google's datacenter energy use:
Google today disclosed details of its data center energy usage, confirming that it operates some of the most efficient facilities in the world. Google said it is averaging a Power Usage Efficiency (PUE) rating of 1.21 across its six company-built data centers, and one of its facilities is operating with a PUE of 1.13, the lowest ever published and just above the "perfect" efficiency score of 1.0.
According to the article, a typical datacenter PUE is 2.0, and ..."the lowest claim in our travels here at DCK is a PUE of 1.28 for Sun's data center in Santa Clara, Calif."
"We reduced the energy-weighted average overhead across all Google-built data centers to 21% versus the average of 96% reported by the EPA," Google says in a new section of its web site dedicated to data center efficiency. "In other words, compared to standard data centers we've reduced the overhead by more than fourfold. To our knowledge, no other large-scale production data center has ever operated as efficiently. In fact, one of our data centers is running at an even lower overhead of only 15%, a sixfold improvement in efficiency."
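PUE and "overhead" are two views of the same ratio: PUE is all power entering the facility divided by the power that actually reaches the IT equipment, and overhead is whatever sits above 1.0. A minimal sketch of the arithmetic (the kW figures are illustrative, not Google's):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_equipment_kw

def overhead_pct(pue_value: float) -> float:
    """Energy overhead (cooling, power distribution, etc.) implied by a PUE."""
    return (pue_value - 1.0) * 100.0

# A facility drawing 1,210 kW to deliver 1,000 kW to its servers:
print(pue(1210.0, 1000.0))   # -> 1.21, Google's fleet-wide average
print(overhead_pct(1.21))    # -> 21.0 percent overhead
print(overhead_pct(2.0))     # -> 100.0; a "typical" PUE of 2.0 roughly
                             #    matches the ~96% average the EPA reports
```

This is why Google can describe a PUE of 1.21 and "21% overhead" interchangeably, and why the 1.13 facility is so striking: it wastes barely a watt in eight on non-compute load.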
An interesting article with a peek inside a traditionally very secretive tech company. If you are responsible for a datacenter, you need to read it.
>>Tech on display at the High Productivity HPC event in Houston
This week Cluster Resources announced its participation in High Productivity HPC, held Oct. 2 at PCPC Direct's headquarters in Houston, Texas. Other partners in the event include HP, Microsoft, and PCPC Direct -- a provider of high-performance technology solutions in specialized sectors like oil and gas.
Cluster Resources will demonstrate the Moab Hybrid Cluster -- an HPC solution that dynamically changes cluster servers between Linux and Windows based on workload, defined policies, and application needs -- on PCPC's hybrid boot cluster.
"The Moab Hybrid Cluster solution overcomes the capacity-planning problem inherent when estimating the number of servers allocated to different OS resource pools," stated Michael Jackson, president of Cluster Resources. "Traditionally, these static resource pools have different peak usage times, where one OS remains idle while another has a backlog of workload. The hybrid model breaks down OS resource silos, letting OS pools grow and shrink dynamically to take advantage of otherwise idle compute resources. It also intelligently overcomes hardware and job failures by reallocating compute nodes with the proper OS to compensate for the failures."
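The policy Jackson describes -- drain idle nodes from one OS pool and reboot them into the other when backlog appears -- can be sketched roughly as follows. This is our illustration of the idea, not Moab's actual policy engine; function and field names are hypothetical, and a real scheduler would also weigh job priorities, reservations, and the cost of reprovisioning a node.

```python
def reprovision_plan(idle: dict, backlog: dict) -> list:
    """Given per-OS idle-node counts and queued-job counts, decide which
    idle nodes to reboot into the other OS.

    Returns a list of (from_os, to_os, node_count) tuples.
    Assumes exactly two OS pools, 'linux' and 'windows'."""
    plans = []
    for os_name, queued in backlog.items():
        other = "linux" if os_name == "windows" else "windows"
        spare = idle.get(other, 0)
        if queued > 0 and spare > 0:
            # Jobs are waiting for os_name while the other pool sits idle:
            # switch over as many idle nodes as the backlog can use.
            plans.append((other, os_name, min(spare, queued)))
    return plans

# Six Windows jobs queued while four Linux nodes sit idle: the hybrid
# model would reboot all four idle Linux nodes into Windows.
plan = reprovision_plan({"linux": 4, "windows": 0},
                        {"windows": 6, "linux": 0})
```

The point of the hybrid model is visible even in this toy version: the static alternative would leave those four Linux nodes idle while the Windows queue backs up.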
HP will also be showing its workgroup system, about which we haven't heard in a while:
High Productivity HPC will also showcase the affordable, powerful, and easy-to-use HP Cluster Platform Workgroup System (CPWS). This HPC cluster solution is based on the compact HP BladeSystem c3000 enclosure, which is designed for smaller technology sites, branch offices, and remote locations. Powered by industry-leading HP BladeSystem c-Class servers, CPWS is easily configured, ordered, and deployed without special power or cooling. This HPC platform enables midsize customers to drive new levels of innovation and productivity while still managing overhead costs.
Admission to the event is free; for info or registration email email@example.com.
>>New Red Hat Linux for HPC
Linux Today carried news this week of a forthcoming HPC distro of everyone's favorite OS. This distro marks a change from the previous Red Hat approach to HPC, as noted by the Inquirer:
The current offering, the Red Hat Enterprise Linux HPC Compute Node subscription, is based on the standard distro but tailored for compute nodes running HPC workloads. The subscription is available for HPC compute nodes used in clusters with four or more systems. Users access compute nodes via head node servers, which of course use the more expensive Red Hat Enterprise Linux ES or AS.
Instead, the upcoming Red Hat HPC is actually a fully integrated software stack using combined technologies from Red Hat and Platform Computing.
Red Hat released the HPC solution on Oct. 2 (five days sooner than was anticipated by Linux Today). From the announcement:
The Red Hat HPC Solution incorporates all of the components necessary to deploy and maintain HPC clusters, including Red Hat Enterprise Linux 5.2, the world's leading open source operating system, and Platform Computing's cluster software framework, Platform Open Cluster Stack 5. The solution also includes device drivers, a simple cluster installer, cluster management tools, a resource and application monitor, interconnect support and Platform Lava, a powerful job scheduler.
The integration with Platform follows up on the announcement made last November that the two companies would start working more closely together in HPC.