December 16, 2005
Allinea Software, a supplier of high performance computing tools, has announced a reseller agreement with IBM involving joint sales and marketing of Allinea's distributed computing products. The agreement will bring IBM customers Allinea's scalar and parallel toolkit, including its Distributed Debugging Tool (DDT) and its new Optimization and Profiling Tool (OPT). The tools are available on systems based on IBM Power Architecture technology, AMD Opteron, and Intel-based clusters and SMPs. They are designed to help customers develop scalable parallel applications that can exploit the growing number of CPUs delivered in modern HPC architectures. The agreement follows a number of joint successes in which DDT and OPT have been chosen by IBM customers for their development requirements on IBM AIX and Linux supercomputing platforms.
"Today's computer systems offer significantly more processors than in the past, and with the trend towards multi-core microprocessors and large numbers of execution units on chip, the need for reliable and scalable parallel software is increasing. We believe that our powerful development tools can help IBM and its customers to harness the power that these systems provide," said Michael Rudgyard, CEO of Allinea Software. "We are very happy to be entering into this global sales agreement with IBM."
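The reliability problems Rudgyard alludes to can be made concrete with a small sketch (a hypothetical illustration, not taken from Allinea's materials): several processes updating a shared counter, the textbook class of defect that a parallel debugger such as DDT is built to track down.

```python
# Hypothetical illustration: a shared counter updated from several
# processes -- the classic parallel-correctness bug a parallel
# debugger helps diagnose. Not drawn from Allinea's documentation.
from multiprocessing import Process, Value, Lock

def worker(counter, lock, iters):
    for _ in range(iters):
        # Removing this lock reintroduces the race: the read-modify-write
        # on counter.value is not atomic, so updates can be lost.
        with lock:
            counter.value += 1

def run(nprocs=4, iters=100000):
    lock = Lock()
    counter = Value('l', 0, lock=False)  # raw shared value, explicit lock
    procs = [Process(target=worker, args=(counter, lock, iters))
             for _ in range(nprocs)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return counter.value

if __name__ == "__main__":
    # With the lock held, the total is exact; without it, it is
    # typically short of nprocs * iters on a multi-core machine.
    print(run())
```

With the lock in place the result is deterministic; without it, lost updates appear only intermittently and only at scale, which is precisely why interactive debuggers that can attach to many processes at once are valuable on large systems.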
Allinea announced the availability of DDT for Linux on POWER at Supercomputing 2004, and shortly afterwards it released DDT for AIX. Since then, it has registered a growing number of successes on these platforms.
Leicester University was one of Allinea's first customers for Linux on POWER, and has a 108-processor cluster of SMPs that is used for astrophysics simulations. "When we bought our OpenPower cluster from IBM, we needed a cross-platform parallel debugger that we could offer to our users," said Chris Rudge, Facility Manager for the UK Astrophysics Fluids Facility at Leicester University. "We found Allinea's DDT debugger both powerful and easy to use, and have been working closely with Allinea to help define requirements for their new OPT profiler."
"As one of Allinea's earliest customers for DDT, we have had the opportunity to see the tremendous improvements in the product's capabilities over the last two years. DDT is exactly what we are looking for in a parallel debugger: powerful, cross-platform, yet easy to use on both workstations and clusters. So when we were planning to buy the Linux on Power cluster from IBM, we chose DDT for this platform: it was an easy choice," said Alan Tackett, Technical Director of the Advanced Computing Center for Research and Education at Vanderbilt University.
Institut Francais du Petrole (IFP), the French national oil and gas research center, operates clusters based on AMD, Intel, and IBM Power processors running several flavors of Linux as well as AIX, and required development tools that could support this heterogeneous environment. "We wanted a universal, powerful parallel solution, and only Allinea could provide us with such a complete offering with the kind of responsiveness we expect from a successful company," said Stephane Requena, HPC Architect at IFP.
"IBM has a continued commitment to accelerating the adoption of cluster computing for large-scale computing applications," said Herb Schultz, Program Director, Deep Computing, IBM. "Working with providers such as Allinea, IBM can help customers drive higher levels of performance from these solutions, ultimately increasing customers' ability to apply computational power to solve complex problems."