The Shared Hierarchical Academic Research Computing Network (SHARCNET) is a collaboration of 14 universities, colleges and research institutes in southern Ontario, using high performance computing for research and innovation. Structured as a “cluster of clusters” and linked by a fiber optic network, SHARCNET is designed to provide a platform for world-class computational challenges as well as link academic researchers with corporate partners to develop new business opportunities.
Established in June 2001, SHARCNET is the culmination of the vision of faculty from McMaster University (Hugh Couchman), The University of Western Ontario (Peter Poole, Allan MacIsaac, Mike Bauer, Nils Petersen) and the University of Guelph (Deborah Stacey). They developed the successful grant applications to the Canada Foundation for Innovation, Ontario Innovation Trust and Ontario Research and Development Challenge Fund, with a combined budget of $42 million. The money was used to build the high performance computing infrastructure across the original SHARCNET institutions, as well as fund a number of research programs — Chairs and Fellowships — which are unique among Canadian HPC centers.
SHARCNET is further supported by an academic-industrial alliance. Its private sector partners include Hewlett-Packard, Platform Computing, Bell Canada, Nortel Networks, Quadrics Ltd, SGI, and the Optical Regional Advanced Network of Ontario.
As the Scientific Director of SHARCNET, Hugh Couchman is responsible for coordinating activities and developing policy initiatives. HPCwire got a chance to talk with him about SHARCNET and his vision for the organization, in the context of Canadian HPC.
HPCwire: Like many industrialized countries, Canada has realized the strategic importance of high performance computing to drive 21st century economic development. What is the Canadian government's plan for investing in HPC resources?
Couchman: I would say that Canada is just at the beginning of this process. The long-range plan that we have just completed [see http://www.c3.ca/LRP/] laid out a strategy for the necessary ongoing investments and the community is now selling the plan to federal and provincial governments and funding agencies. There is a growing awareness of the importance of supporting the technology and the skilled personnel to operate and use it.
At present we have had the greatest success with the primary federal funding agency for large-scale research infrastructure projects, the Canada Foundation for Innovation (CFI) and its provincial matching counterparts. They have established a fund to support enabling platforms that serve the needs of many research disciplines; HPC was named as the first target for this fund. I think that this represents a very important first step in developing awareness of the need for ongoing and comprehensive support of HPC. We need to ensure that this sort of support is stable and that the requirements for operational and personnel costs are properly recognized.
HPCwire: How does SHARCNET fit into this plan? What is its mission?
Couchman: The plan recognized that the so-called mid-range of computational capability was well served in Canada by the seven “HPC consortia.” I would note that the plan also advocated a national center supporting a Top20 facility — but this has yet to be realized. SHARCNET, serving 14 universities and colleges in Ontario, is one of the two largest of these consortia.
Our mission is to provide comprehensive HPC resources to our research community to permit them to undertake forefront computational research. Our strategic model is based on a four-layered approach. The first three layers are fairly standard: we provide the hardware, the operational and systems-level support, and the user- and application-level support, as is common at many centers. We also provide a fourth layer of programs that encourages the development of computational skills in the community, in the belief that this is the best way to build discipline leaders and innovators. These programs provide bridge funding at our partner institutions for computationally skilled new faculty, as well as a matching program that helps researchers bring in graduate students, postdoctoral fellows and so on. This approach to building what we term an “HPC Culture” is unique in Canada.
HPCwire: What is the model for sharing SHARCNET resources? Is it a Grid strategy or something else?
Couchman: In a sense, SHARCNET is not really a “Grid.” We do have hardware distributed across a number of sites (four primary architecture-specific “resource centers” as well as a number of “development” systems), but all are directly controlled by SHARCNET and there is a global approach to resource utilization. Indeed, the agreement governing the consortium explicitly states that all resources are equally available to all members, independent of the location of researcher or resource. Further, since all resources are connected to a private, dedicated network, we avoid many of the security issues associated with sharing across different ownership domains.
It is interesting that in many respects, because Canada was relatively late into the HPC game, we began right off the bat with a sharing model as a way for institutions to get access to resources that no one institution could hope to achieve on its own. We thus avoided many of the political difficulties associated with bringing together disparate established resources. The consortia themselves are now talking about generating some sort of cross-Canada HPC network and so many of the standard Grid issues are going to become quickly relevant.
HPCwire: How are SHARCNET's commercial partners contributing to its success?
Couchman: The CFI program is structured in such a way as to demand a matching contribution from our vendor partners, and both of our primary suppliers of compute hardware, HP and SGI, have responded to this extremely positively. Beyond this, however, they have made significant contributions to the operation of SHARCNET and, in particular, to the chairs and fellowships programs. We also have significant support from a number of other partners including Platform, Quadrics, Nortel and Bell. Of special significance is the relationship we have with the provincial research and education optical network, ORION [Ontario Research and Innovation Optical Network], which has allowed us to establish our dedicated network.
HPCwire: Describe the recent upgrade of infrastructure at SHARCNET. What kind of capabilities and research opportunities will this new infrastructure enable?
Couchman: We are installing a number of components. The four main clusters from HP range from a 384-processor system built of 4-way Opteron boxes with 32 GB of memory per node, to a 1000-processor “utility parallel” Myrinet-connected cluster, to a 1500-processor Quadrics “capability” cluster, to a 3000-processor serial farm. The intent and need is to serve a very diverse research community — we have over 1200 users — requiring a diverse set of architectures to best serve their research needs. Roughly half of our cycles are spent on serial or highly parallel applications, and the other half on true parallel applications ranging all the way from modestly parallel (8- to 64-way) to applications scaling out to several hundred processors. Each large cluster has some 70 TB of Lustre-based storage associated with it, and 200 TB of archival storage provides the back end of a hierarchical storage capability. We are also consolidating all of our older Alpha equipment into a single cluster as a way of extending its useful life.
In addition to the primary resource clusters, we have a 128-processor SGI Altix for threaded and OpenMP applications, as well as development clusters at the remaining sites, each of which is a 32-node, dual-socket, dual-core Myrinet-connected cluster. The idea is to provide a local development environment and platform for small production runs, along with a seamless route to migrate applications to the larger systems as necessary. All sites will have visualization capabilities as well as remote conferencing/collaboration facilities — through AccessGrid — to facilitate the human interactions for training, seminars and research collaborations.
HPCwire: What are the future plans for SHARCNET?
Couchman: The present installations will increase SHARCNET's capability by over an order of magnitude, and our immediate focus is to get all of the new equipment into operation and serving users effectively as soon as possible. We are spending a lot of time building an environment that lets users access these systems effectively so that they can produce quick research results. This includes web-based resource and job monitoring, common user environments and robust, transparent access to the resources.
In the medium term we are making a strong push to develop a computational research community across the SHARCNET institutions through the use of common seminars, training and discipline focus groups. AccessGrid is a fundamental technical element for this effort, but we plan to leverage the various courses and expertise that are present at our partner institutions to build critical mass in a number of areas across the consortium.
Another key element will be an increasing effort to coordinate with the other consortia. There is already a strong sense that there is a great deal to be gained by sharing services and infrastructure across the country, and the request from CFI for a national vision for HPC in Canada over the next three to five years strongly reinforces this. We are also interested in looking forward to the next generation of equipment that we will presumably have to install in four years' time or so, and all of the interesting requirements for machine room space, power etc. that one can project from current installations!