February 20, 2013
KAWASAKI, Japan, and SUNNYVALE, Calif., Feb. 20 – Fujitsu Laboratories Limited and Fujitsu Laboratories of America, Inc. today announced the development of transmitter circuits, equalizer circuits for transmission loss, and receiver circuits capable of communicating at 32 Gbps, the world's fastest speed to date. These developments will support inter-processor communications in the next generation of servers.
Along with increases in CPU performance in recent years, the data processing capabilities of servers have improved greatly, leading to a need for faster data communications between chips and between circuit boards. Through the development of a new kind of transceiver circuit, along with an equalizer circuit that can compensate for signal degradation in transmission lines, Fujitsu Laboratories has made it possible to roughly double data communications speed between CPUs.
These new technologies are expected to lead to improved performance in the next generation of servers and supercomputers.
Details of the new technologies will be presented at the IEEE International Solid-State Circuits Conference 2013 (ISSCC 2013), beginning Sunday, February 17, 2013 in San Francisco (ISSCC presentations 2.7, 2.1, and 2.5).
In recent years, there has been a greater need for improvements in data processing performance for servers employed in datacenters, which support cloud computing and other applications. This has led to enhancements in CPU performance, as well as the development of large-scale systems that connect large numbers of CPUs. As a result, the amount of data traffic exchanged between CPUs and peripheral devices has grown substantially. To accommodate this high volume of traffic, inter-processor data communications speeds in today's servers have increased from a few Gbps to tens of Gbps. In anticipation of the next generation of high-performance servers, expectations are growing for communication speeds to increase even further.
Increasing the speed of inter-processor data communications requires that both transmitter circuits and receiver circuits operate at higher speeds. Moreover, signal degradation over transmission lines, such as electrical wiring on printed circuit boards, becomes more significant at higher speeds. Therefore, when operating at higher speeds, equalizer circuits necessary to compensate for this transmission loss also require performance improvements.
Newly Developed Technology
The inter-processor data communication unit is broadly divided into a transmitter unit and a receiver unit. The latter consists of 1) an equalizer circuit that compensates for signal degradation over transmission lines and 2) a receiver circuit that reads the original data from the restored signals (Figure 1). By employing new kinds of technologies in the transmitter circuits as well as in the equalizer and receiver circuits within the receiver unit, Fujitsu Laboratories has succeeded in improving communication speeds.
Figure 1: Schematic of high-speed transmitter and receiver units for inter-processor communication
1. Transmitter circuit (ISSCC presentation 2.7)
Transmitter circuits transmit data from multiple channels that have been multiplexed into a single channel. The final-stage multiplexer not only consumes a considerable amount of power, but also approaches the limit of its operating speed as data rates increase. Fujitsu Laboratories has developed a transmitter circuit that eliminates the need for the final-stage multiplexer (2-to-1 multiplexer). Rather than using conventional binary values (0, 1) in the transmitted signals, the new circuit uses ternary values (0, 1, 2). This makes it possible to restore the original data on the receiving end using only existing receiver-circuit functionality, without adding any special circuitry (Figure 2, left). As a result, the circuit exceeds the speed limit of conventional transmitter units while also reducing power consumption by roughly 30% compared to existing technology (Figure 2, right).
Figure 2: Schematic of transmitter circuit and breakdown of power consumption
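The announcement does not spell out the exact coding scheme, but the idea of transmitting ternary (three-level) symbols that a simple receiver can decode back to binary can be illustrated with a classic duobinary-style model: with a precoder at the transmitter, each line level y[n] = p[n] + p[n-1] takes values in {0, 1, 2}, and the receiver recovers each bit as y[n] mod 2, with no extra memory or special circuitry. This is a minimal sketch under that assumption, not Fujitsu's actual circuit:

```python
# Illustrative duobinary-style ternary signaling model. The precoding and
# decoding rule here are a textbook scheme assumed for illustration; the
# press release does not specify Fujitsu's exact coding.

def precode(bits):
    # p[n] = bits[n] XOR p[n-1]; makes decoding at the receiver memoryless
    out, prev = [], 0
    for b in bits:
        prev = b ^ prev
        out.append(prev)
    return out

def transmit(precoded):
    # line carries ternary level y[n] = p[n] + p[n-1], a value in {0, 1, 2}
    levels, prev = [], 0
    for p in precoded:
        levels.append(p + prev)
        prev = p
    return levels

def receive(levels):
    # thanks to precoding, each original bit is simply y[n] mod 2
    return [y % 2 for y in levels]

bits = [1, 0, 1, 1, 0, 0, 1]
assert receive(transmit(precode(bits))) == bits
```

The design point this models is that the receiver's decision logic stays per-symbol and stateless, which is the property the release highlights: the original data is restored with existing receiver functionality.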
2. Equalizer circuit in the receiver unit for transmission loss (ISSCC presentation 2.1)
The quality of the signal output by the transmitter unit degrades as it travels across printed circuit boards and other transmission lines. The degradation grows with both the length of the transmission line and the speed of the signal, so loss increases as speeds rise, even over the same line. Conventionally, compensating for the attenuation produced at high frequencies yields a flat frequency response, thereby correcting the distortion. But as the signal band for high-speed transmission extends still farther into the high-frequency range, a drop-off in low-frequency response, which previously was not a problem, makes it impossible to adequately correct the distortion. Fujitsu Laboratories has developed a circuit that compensates for signal loss by flattening the frequency response at low frequencies as well. This technology has made it possible to carry a signal at 32 Gbps over a transmission distance of 80 cm, which was previously not possible (Figure 3).
Figure 3: Frequency characteristics of equalizer circuit
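The effect described above can be sketched numerically: if an equalizer boosts only the high-frequency attenuation, a residual low-frequency droop remains in the combined response; adding a low-frequency compensation stage flattens it. The response shapes and coefficients below are invented purely for illustration and are not taken from the paper:

```python
import numpy as np

# Toy magnitude responses in dB (illustrative numbers, not measured data).
f = np.linspace(0.1, 16, 200)                    # frequency axis, GHz
channel_db = -1.2 * f - 2.0 * np.log10(1 + f)    # loss grows with frequency
hf_boost_db = 1.2 * f                            # conventional high-frequency boost
lf_boost_db = 2.0 * np.log10(1 + f)              # added low-frequency compensation

# High-frequency equalization alone leaves a residual low-frequency droop:
resid_hf_only = channel_db + hf_boost_db
assert float(np.ptp(resid_hf_only)) > 1.0        # > 1 dB of ripple remains

# Adding the low-frequency stage flattens the combined response:
combined = channel_db + hf_boost_db + lf_boost_db
assert float(np.ptp(combined)) < 1e-9            # flat to numerical precision
```

The flat result is by construction here; the point of the sketch is only to show why a wideband signal needs both stages once its band reaches into regions where the droop matters.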
3. Receiver circuit in the receiver unit (ISSCC presentation 2.5)
The receiver circuit reads the original data from the signal that has been reshaped by the transmission-loss equalizer circuit. To do so, it must synchronize to the speed (frequency) and timing (phase) of the signal, sample the signal, and determine the original digital values. Conventionally, timing errors that occur when reading data are detected from the source data by a timing-error detection unit and then corrected through resynchronization by a timing modulation circuit (Figure 4, upper left). But at higher signal speeds, this method demands clock-control timing precision at the limit of existing technologies. Instead of synchronizing the clock, Fujitsu Laboratories has developed a data interpolation method in which the signal is periodically sampled and voltage interpolation is applied to two actually sampled values to synthesize a virtual signal that is synchronized to the clock (Figure 4, lower left & right). This technology obviates the need for a timing modulation circuit with high resolution in the time axis, making it amenable to further speed increases in the future.
Figure 4: Principles of the data interpolation method
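The interpolation idea can be sketched as simple linear interpolation in the voltage domain: two real samples taken one clock period apart are blended by a fractional phase weight to synthesize a virtual sample aligned with the data. The sine test signal, period, and phase offset below are hypothetical values chosen for illustration; the actual circuit performs this operation on analog voltages:

```python
import math

def virtual_sample(v0, v1, mu):
    """Linearly interpolate two real samples taken one period apart
    to synthesize a virtual sample at fractional phase mu in [0, 1)."""
    return (1.0 - mu) * v0 + mu * v1

T = 0.05                      # fixed sampling period of the free-running clock
phase = 0.37 * T              # unknown offset between clock and data (assumed)
sig = lambda t: math.sin(2 * math.pi * t)

# Sample with the unsynchronized clock, then synthesize aligned samples.
raw = [sig(n * T) for n in range(40)]
mu = phase / T
aligned = [virtual_sample(raw[n], raw[n + 1], mu) for n in range(39)]

# Each synthesized sample approximates the signal at time n*T + phase.
err = max(abs(aligned[n] - sig(n * T + phase)) for n in range(39))
assert err < 0.05
```

Note the trade this models: rather than steering the clock edge with fine time resolution, the circuit shifts the problem into the voltage domain, where interpolation between two captured values is comparatively cheap.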
These technologies are expected to contribute significantly to performance improvements in the next generation of servers and supercomputers.
Fujitsu Laboratories will work to apply these technologies to product areas related to big data, such as the backplane interfaces that link the boards together in the servers.
About Fujitsu
Fujitsu is the leading Japanese information and communication technology (ICT) company offering a full range of technology products, solutions and services. Over 170,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers. Fujitsu Limited reported consolidated revenues of 4.5 trillion yen (US$54 billion) for the fiscal year ended March 31, 2012.
About Fujitsu Laboratories
Founded in 1968 as a wholly owned subsidiary of Fujitsu Limited, Fujitsu Laboratories Limited is one of the premier research centers in the world. With a global network of laboratories in Japan, China, the United States and Europe, the organization conducts a wide range of basic and applied research in the areas of Next-generation Services, Computer Servers, Networks, Electronic Devices and Advanced Materials.
About Fujitsu Laboratories of America
Fujitsu Laboratories of America, Inc. is a wholly owned subsidiary of Fujitsu Laboratories Ltd. (Japan), focusing on research on Internet, interconnect technologies, software development and solutions for several industry verticals. Conducting research in an open environment, it contributes to the global research community and the IT industry. It is headquartered in Sunnyvale, Calif.
Source: Fujitsu Laboratories