July 30, 2012
July 29 -- A modular datacenter network (MDCN) is the key component in building mega-datacenters. Huang Feng, Li Dongsheng, and their group from the National Key Laboratory of Parallel and Distributed Processing, School of Computers, National University of Defense Technology present a novel hybrid intra-container network for modular datacenters (MDCs), called SCautz, together with a suite of routing protocols. SCautz provides high network throughput for various traffic patterns, and its performance degrades gracefully as failures accumulate. Their work, entitled "SCautz: a high performance and fault-tolerant datacenter network for modular datacenters", was published in Science China Information Sciences, 2012, Vol. 55(7).
To construct a mega-datacenter, an MDC first packages thousands of servers, already set up and wired, into a shipping container; the containers are then connected together as large pluggable building blocks. Once connected to power, cooling infrastructure, and the Internet, an MDC can provide services at any location in the world.
Traditional datacenter networks (DCNs) interconnect large numbers of servers directly, making them difficult to build and maintain. An MDCN consists of intra- and inter-container networks, which greatly simplifies the design and implementation. Typically, a standard 20- or 40-foot shipping container is equipped with 1200-2500 servers, and the number of servers in a container is fixed during its lifetime. This moderate scale relaxes the scalability constraints on DCNs, so intra-container networks can adopt more complex topologies. The Kautz graph achieves a near-optimal tradeoff between node degree and diameter, and has better bisection width and bottleneck degree, yet it is considered unsuitable for traditional DCNs because it is difficult to deploy incrementally without violating the original structure. In an MDC, on the other hand, the number of servers in a single container is fixed, and the inner network does not change throughout its lifespan. SCautz was therefore designed around the Kautz graph.
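To make the Kautz topology concrete, here is a minimal Python sketch of how such a graph can be generated. The function names and parameterization are our own illustration, not code from the paper: vertices of the Kautz graph K(M, N) are length-(N+1) strings over an alphabet of M+1 symbols in which no two consecutive symbols are equal, and each vertex's M out-neighbors are obtained by a left shift, giving (M+1)·M^N vertices of degree M and diameter N+1.

```python
from itertools import product

def kautz_nodes(m, n):
    """Vertices of Kautz graph K(m, n): length-(n+1) tuples over an
    alphabet of m+1 symbols with no two consecutive symbols equal."""
    alphabet = range(m + 1)
    return [s for s in product(alphabet, repeat=n + 1)
            if all(a != b for a, b in zip(s, s[1:]))]

def kautz_neighbors(node, m):
    """Out-neighbors of a vertex: left-shift the label and append any
    symbol different from the new last symbol."""
    return [node[1:] + (x,) for x in range(m + 1) if x != node[-1]]
```

For example, K(2, 1) has (2+1)·2 = 6 vertices, each with exactly 2 out-neighbors, which is what gives the Kautz graph its favorable degree/diameter tradeoff.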
Containerization lowers the total cost of ownership for cloud providers and allows operators to manage the MDC under a "service-free" model. In other words, a container as a whole is never repaired during its deployment lifespan (typically 3-5 years): as long as the performance of the entire container meets an engineered minimum criterion, individual components are not repaired. However, server failures not only decrease the MDC's computation and storage capacity, but also break the MDCN's structure. For instance, BCube is an excellent network architecture for current MDCs, yet once its structure becomes incomplete, throughput for one-to-x traffic patterns drops noticeably, while the ABT (aggregate bottleneck throughput) for all-to-all traffic degrades faster than computation and storage capacity do. Switch failures hurt BCube's performance even more: its ABT deteriorates by more than 50% in the presence of 20% switch failures. That network performance degrades faster than computation or storage capacity is the MDCN's ultimate weakness, since it causes the container's overall performance to fall below the threshold criterion and ends its lifespan prematurely.
Therefore, following the "scale out" principle, SCautz adopts a hybrid structure to deal with server and switch faults. It comprises a base physical Kautz topology, built by interconnecting the servers' NIC ports, plus a small number of redundant commercial off-the-shelf (COTS) switches. In SCautz, each switch and a specific number of servers form a "cluster". Since switches are divided into two types according to their identifiers, clusters are likewise divided into two types. Viewed as logical nodes, the two types of clusters construct two higher-level logical Kautz structures, respectively, as shown in the figure below. SCautz's hybrid structure has the following advantages.
First, SCautz can run in different modes with its switches on or off. Its base topology provides high capacity for various traffic patterns and performs as well as BCube. Moreover, with the switches enabled, SCautz's complete structure can roughly double performance, handling bursts of network flows effectively without lowering the quality of bandwidth-intensive applications.
Second, SCautz improves the fault tolerance of the MDCN by using the redundant switches. When a server fails, SCautz finds a peer server in the same cluster to stand in for the failed one. It can thus maintain throughput for one-to-x traffic (e.g., one-to-one, one-to-all) and roughly halve the ABT loss, so that network performance degrades much more slowly than the MDC's computation or storage capacity.
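The peer-substitution idea described above can be illustrated with a small hypothetical sketch. The function name, data structures, and liveness bookkeeping here are assumptions made for illustration, not the paper's actual protocol:

```python
def find_standby(failed, clusters, alive):
    """Return a live peer in the failed server's cluster, or None.

    `clusters` maps a cluster id to its member servers (each cluster
    shares one COTS switch in SCautz); `alive` is the set of servers
    currently up. Hypothetical sketch of the peer-substitution idea.
    """
    for members in clusters.values():
        if failed in members:
            for peer in members:
                if peer != failed and peer in alive:
                    return peer  # peer bypasses the failed server
    return None  # no live peer left in this cluster
```

For instance, with clusters `{"c0": ["s0", "s1", "s2"]}` and `s0` down, traffic destined for `s0` would be redirected to `s1` or `s2`, preserving one-to-x throughput as the article describes.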
Third, the extra cost of the redundant design is very low. Theoretical analysis shows that a typical SCautz-based container with 1280 servers only needs 160 COTS switches.
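The overhead implied by these numbers can be checked with simple arithmetic. The per-cluster reading (each redundant switch serving an equal share of servers) is our inference from the cluster description above, not a figure stated in the article:

```python
servers = 1280    # servers in a typical SCautz container (from the article)
switches = 160    # redundant COTS switches (from the article)

# Assuming each cluster pairs one switch with an equal share of servers:
servers_per_switch = servers // switches   # servers sharing each cluster switch
switch_overhead = switches / servers       # extra switches per server
```

Under that assumption, one COTS switch serves eight servers, i.e., an eighth of a switch per server, which supports the article's claim that the redundancy cost is low.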
The design and implementation of SCautz was a collaborative effort involving many researchers. The project was partially supported by the National Basic Research Program of China (Grant No. 2011CB302600) and the National Natural Science Foundation of China (Grant No. 60903205), amongst others. It is an important breakthrough in modular datacenter network architectures. The researchers note that their work still needs to be implemented and evaluated in a larger production datacenter. This work should have significant impact on datacenter construction and cloud computing.
Source: National Key Laboratory of Parallel and Distributed Processing