April 13, 2009
Members of the HPC Advisory Council discuss the mission of the organization, its progress since it was founded in May of 2008, and how membership is impacting key companies and end users.
Gilad Shainer - Chair, HPC Advisory Council
Brian Sparks - Media Relations Director, HPC Advisory Council
Gautam Shah - CEO, Colfax International
Scot Schultz - Senior Strategic Alliance Manager, AMD
Peter Lillian - Senior Product Marketing Manager, Dell
We're sitting in sunny Sunnyvale, California, where I have just been given a tour of the new operations center, a brand-new facility that houses a lot of competing vendor technology with a unified purpose. Around the table and on the phone I am joined by a cast of interesting characters. They are all at ease and joking with one another (telling me they are "intellectual" characters in addition to being interesting ones), playful and almost forgetting I was in the room altogether. Their lightheartedness is liberating -- they actually had to remind each other, after a few cleared throats, that it was time to be serious. These are some of the busiest guys in HPC, but you can tell they have a deep passion for their missions.
Snell: What is the overall mission of the Council?
Shainer: The Advisory Council's mission is to help bridge the gap between HPC usage and its potential, and to bring system integrators and users the human expertise needed to understand, operate and optimize an HPC system. The council collectively brings the capabilities of HPC to bear to guide users and improve efficiencies for better research, education, innovation and product manufacturing; to provide application designers with the tools needed to enable parallel computing; to strengthen the qualification and integration of HPC system products; to enable new users to leverage the benefits and years of experience of seasoned HPC experts; to allow companies to develop better products and services and be more competitive; and to support comprehensive education by providing information and equipment that allow students to learn about HPC. The other part of our charter is to look at upcoming technologies and facilitate discussions about future developments in and for the HPC market.
Snell: How many members are in the Council now?
Shainer: Currently there are 68 members from all areas of the HPC community, with participation from across the technology manufacturing and development ecosystem: processor, server, interconnect and storage vendors; system integrators; software management and tool developers; application providers; and strong and growing participation from end-user customer sites (Fermi National Accelerator Laboratory, Lawrence Livermore National Laboratory, National Research Center for Intelligent Computing Systems (NCIC), Oak Ridge National Laboratory, Ohio State University, Schlumberger, Swiss National Supercomputing Centre CSCS, the Victorian Partnership for Advanced Computing, Virginia Tech and others).
Snell: What are the Council's key initiatives?
Shainer: We provide four core competencies. The first is a set of best practices. Leveraging the membership's experience and research, we are able to show users how to improve their total application performance and increase system- and user-level productivity. From this, we are creating best practices and guidelines for numerous applications across the HPC market. Currently these best practices cover weather research, bioscience, quantum chemistry, oil and gas exploration, and automotive simulations. We will be releasing more results very soon, including an analysis of computational fluid dynamics and the multiple applications required in that type of research environment, including engine and aircraft design. We will also be offering more cross-platform best practices for both Microsoft Windows and open source Linux environments. All of this work is downloadable from the Council Web site, and we present many of these findings at major conferences.
We also offer a network of experts. Through our member organizations, we provide access to leading experts who give significant guidance to each research endeavor and further extend themselves to help lead and guide the Council, the projects and their respective companies, as well as answer questions about HPC implementations, general usage or trends. User queries can be submitted through the Council Web site. Users can also subscribe and submit queries to the mailing list, which is shared with all of the supporting technology experts to field incoming questions and provide deeper insight into the changing and growing user environment.
In addition, the council provides systems-level access to our cluster center. We host several types of high-performance systems through our technology center in Santa Clara, Calif. These systems are available to the members and to end users to perform dynamic, real-time testing for their own code, applications or products. The technology equipment currently housed at the center includes systems from AMD, Dell, Intel, Mellanox and Sun Microsystems. As the membership grows, additional systems will be integrated into the center and testing environment.
Finally, there are our educational efforts. Collectively, the membership is a huge champion of the top priorities for the HPC community: education, infrastructure and maintaining overall leadership in HPC worldwide, all of which are tightly intertwined. In support of future generations, we actively provide the necessary resources and solicit contributions from our membership to enable ongoing education. Collaboration includes everything from developing and providing teaching labs to donating the technology itself. We bring these tools directly to schools and students to help excite and support the development of the up-and-coming generation of explorers, technologists and researchers.
As an example, with the help of our Council member Wolfram Research, we were able to donate a small Colfax-based cluster and the GridMathematica software to an advanced mathematics class at Torrey Pines High School in Southern California. The results showed students what can be done with HPC and provided the class with a hands-on lab, which is rare in most high school environments. This type of effort also benefits the Council, as we will be able to leverage that model and the students' experience to develop additional curricula for future classes.
Shah: For Colfax International, the Council's educational initiative has two distinct benefits. First, we enable new users and students to learn how to benefit from HPC -- so that in the future they will be able to develop and improve product designs, shorten time to market, and so on -- and second, we support the growth and success of school projects and university research efforts. The benefit to the Council membership is an opportunity to learn how to create better solutions, both in general and for specific usage models. The Torrey Pines example gave us the feedback we needed to create cost-effective, optimized solutions for universities and schools, which depend on high-performance systems but must balance that performance with efficiency, the lowest possible power consumption and virtually zero noise. Our ability to offer these customizable configurations to similar users, especially in a training environment, is a key factor in all of our development and delivery efforts.
Shainer: The results of our efforts are also presented at key conferences, such as the Rice University workshop in March, where we presented our findings for the Schlumberger Eclipse application. We were able to show how to use commodity-based clusters and improve system performance to maximize results for workloads based on Eclipse, one of the world's leading oil and gas applications.
We also presented at the Linux Cluster Institute (LCI) International Conference. The conference was hosted in Boulder, Colo., and presented a great opportunity to feature the work of AMD, Dell and Mellanox with the National Center for Atmospheric Research (NCAR) on weather modeling with commodity-based systems. Our findings will also be presented at the IDC HPC User Forum in Virginia in late April; the international LS-DYNA user group conference in May in Austria; and the International Supercomputing Conference (ISC) in June, where we'll be demonstrating a 40Gbps ecosystem across the exhibitor floor.
Snell: Sounds like a lot of activity. What do members get out of participating in the HPC Advisory Council?
Schultz: What this participation provides to AMD is a comprehensive analysis of performance data, coupled with a new measurement focus on utilization metrics that we can quantify for HPC workloads. Today, any system can be designed with the fastest I/O or the fastest processor, but what users really want and need is a well-balanced platform that achieves optimal utilization. It's not just about how "fast" a job can be done; it is also about "how much" work can be done simultaneously. The collaboration gives us a better understanding of how the various datacenter systems work together, and how we can optimize our own products to best support the overall goals of high performance and productivity.
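As a back-of-the-envelope illustration of that speed-versus-throughput distinction (the numbers here are invented for the example, not AMD measurements), a system that is slower on any single job can still complete more total work if its balanced design sustains more jobs at once:

```python
# Invented numbers to contrast single-job speed with total throughput.

# System A finishes one job sooner, but runs only one job at a time.
a_hours_per_job, a_concurrent_jobs = 10, 1
# System B is slower per job, but a balanced design sustains two at once.
b_hours_per_job, b_concurrent_jobs = 12, 2

throughput_a = a_concurrent_jobs / a_hours_per_job  # jobs per hour
throughput_b = b_concurrent_jobs / b_hours_per_job

print(f"System A: {throughput_a:.3f} jobs/hour")  # 0.100
print(f"System B: {throughput_b:.3f} jobs/hour")  # 0.167 -- more total work done
```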
Lillian: Dell's participation in the advisory council enables us to further test and build upon our captured HPC knowledge and stay at the forefront of technology lifecycle changes and introductions. The Council provides access to industry-leading expertise and systems technology that complement Dell's goals to deliver value-based, end-to-end solutions that offer the assurance of quality and completeness that customers expect.
Council participation allows us to continually improve our focus and refine our product development. With access to a complete testbed environment, we can validate solutions under user-based test scenarios, give customers the tools they need, and learn from the total experience to further improve our own product and delivery models. It is a forum where we are all equally able to extend our thought leadership -- with each other, with our customers and vice versa.
Snell: What's in store for the Council in the future?
Shainer: We will continue to expand our membership, research and best practices, network of experts, and education efforts, all the while looking at new applications, tools and usage models to make businesses more productive and research more extensive and accessible. The three major areas we will explore are HPC-as-a-service, HPC in the cloud, and virtualization.
We want to explore the idea of HPC-as-a-service (HPCaaS) within a company -- how companies can expand the use of their HPC systems and resources. Most companies have HPC systems for a dedicated purpose, such as design engineering. What we want to show is how those companies can take those systems, offer them up as additional resources, and provide services to multiple applications.
We also plan to show how companies can run two different applications at the same time that actually complement each other even though they have different characteristics. For example, one application may require very low latency while another is more bandwidth dependent. Workload analysis such as this will help companies where HPC is not a "mainstay" discipline improve the return on their investment by optimizing all available resources and by making better use of how applications and jobs are scheduled and managed.
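To make that idea concrete, here is a minimal sketch of such a pairing heuristic. This is our illustration, not Council or vendor scheduler code; the job names, resource profiles and greedy pairing rule are all hypothetical assumptions.

```python
# Hypothetical sketch: co-scheduling jobs with complementary resource profiles.
# A latency-sensitive job and a bandwidth-bound job stress different parts of
# the system, so pairing them can raise overall utilization.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    latency_sensitive: bool  # needs low interconnect latency (tightly coupled)
    bandwidth_bound: bool    # streams large data (I/O or memory bandwidth heavy)

def pair_complementary(jobs):
    """Greedily pair one latency-sensitive job with one bandwidth-bound job."""
    latency_jobs = [j for j in jobs if j.latency_sensitive and not j.bandwidth_bound]
    bandwidth_jobs = [j for j in jobs if j.bandwidth_bound and not j.latency_sensitive]
    pairs = list(zip(latency_jobs, bandwidth_jobs))
    paired = {j.name for a, b in pairs for j in (a, b)}
    leftovers = [j for j in jobs if j.name not in paired]
    return pairs, leftovers

if __name__ == "__main__":
    queue = [
        Job("crash-simulation", latency_sensitive=True,  bandwidth_bound=False),
        Job("seismic-imaging",  latency_sensitive=False, bandwidth_bound=True),
        Job("weather-model",    latency_sensitive=True,  bandwidth_bound=False),
    ]
    pairs, leftovers = pair_complementary(queue)
    for a, b in pairs:
        print(f"co-schedule: {a.name} + {b.name}")
    for j in leftovers:
        print(f"schedule alone: {j.name}")
```

A real scheduler would of course profile jobs from measured counters rather than hand-set flags, but the pairing logic captures the resource-complement idea described above.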
We're also looking at furthering our understanding of HPC in the cloud -- what it means and how it can expand access to broader HPC resources.
Finally, we're evaluating how HPC systems and users are leveraging, and will leverage, virtualization to increase efficiency and improve workflow in the datacenter. Most HPC applications already fully utilize the physical system, but in this case we're talking about using virtualization to manage and move workloads more effectively.
As we finished the interview, Gilad Shainer and Brian Sparks gave me a virtual tour of the HPC Advisory Council Web site. The technical content available there is very impressive and provides lots of information on how users can maximize their applications' productivity and reduce the associated costs. They cover applications from weather research, automotive simulation, oil and gas, bioscience, CFD and more, all produced together with the application providers or end users. There are several other HPC organizations out there, but the HPC Advisory Council is clearly unique in providing actual guidelines for users, along with resources for users and application providers to develop and tune their code for best results. I cannot wait until the next update to see how far they will go.