February 20, 2013
KAWASAKI, Japan, and SUNNYVALE, Calif., Feb. 20 – Fujitsu Laboratories Limited and Fujitsu Laboratories of America, Inc. today announced the development of transmitter circuits, equalizer circuits for transmission loss, and receiver circuits capable of communicating at 32 Gbps, the world's fastest speed to date. These developments will support inter-processor communications in the next generation of servers.
Along with increases in CPU performance in recent years, the data processing capabilities of servers have improved greatly, leading to a need for faster data communications between chips and between circuit boards. Through the development of a new kind of transceiver circuit, along with an equalizer circuit that can compensate for signal degradation in transmission lines, Fujitsu Laboratories has made it possible to roughly double data communications speed between CPUs.
These new technologies are expected to lead to improved performance in the next generation of servers and supercomputers.
Details of the new technologies will be presented at the IEEE International Solid-State Circuits Conference 2013 (ISSCC 2013), beginning Sunday, February 17, 2013 in San Francisco (ISSCC presentations 2.7, 2.1, and 2.5).
In recent years, there has been a greater need for improvements in data processing performance for servers employed in datacenters, which support cloud computing and other applications. This has led to enhancements in CPU performance, as well as the development of large-scale systems that connect large numbers of CPUs. As a result, the amount of data traffic exchanged between CPUs and peripheral devices has grown substantially. To accommodate this high volume of traffic, inter-processor data communications speeds in today's servers have increased from a few Gbps to tens of Gbps. In anticipation of the next generation of high-performance servers, expectations are growing for communication speeds to increase even further.
Increasing the speed of inter-processor data communications requires that both transmitter circuits and receiver circuits operate at higher speeds. Moreover, signal degradation over transmission lines, such as electrical wiring on printed circuit boards, becomes more significant at higher speeds. Therefore, when operating at higher speeds, equalizer circuits necessary to compensate for this transmission loss also require performance improvements.
Newly Developed Technology
The inter-processor data communication unit is broadly divided into a transmitter unit and a receiver unit. The latter consists of 1) an equalizer circuit that compensates for signal degradation over transmission lines and 2) a receiver circuit that reads the original data from the restored signals (Figure 1). By employing new kinds of technologies in the transmitter circuits as well as in the equalizer and receiver circuits within the receiver unit, Fujitsu Laboratories has succeeded in improving communication speeds.
Figure 1: Schematic of high-speed transmitter and receiver units for inter-processor communication
1. Transmitter circuit (ISSCC presentation 2.7)
Transmitter circuits transmit data from multiple channels that have been multiplexed into a single channel. The final-stage multiplexer not only consumes a considerable amount of power, but also approaches the limit of its operating speed as data rates increase. Fujitsu Laboratories has developed a transmitter circuit that eliminates the need for a final-stage multiplexer (2-to-1 multiplexer). Rather than using conventional binary values (0, 1) in the transmitted signals, the new circuit uses ternary values (0, 1, 2). This makes it possible to restore the original data on the receiving end using only existing receiver circuit functionality, without adding any special circuitry (Figure 2, left). As a result, the circuit exceeds the speed limit of conventional transmitter units while reducing power consumption by roughly 30% compared to the existing technology (Figure 2, right).
Figure 2: Schematic of transmitter circuit and breakdown of power consumption
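The idea of replacing the final 2-to-1 multiplexer with ternary signaling can be illustrated with a toy software model. The sketch below is an illustrative assumption, not Fujitsu's actual circuit: it sums two half-rate binary streams, each toggling on alternate unit intervals, so the line carries a three-level signal from which the receiver recovers both streams by simple subtraction.

```python
def tx_ternary(bits_a, bits_b):
    """Toy model: two half-rate binary streams, offset by one unit
    interval (UI), are summed on the line -- no final 2:1 multiplexer.
    Stream A updates on even UIs, stream B on odd UIs."""
    levels, a, b = [], 0, 0
    for i in range(2 * len(bits_a)):
        if i % 2 == 0:
            a = bits_a[i // 2]
        else:
            b = bits_b[i // 2]
        levels.append(a + b)       # line level is ternary: 0, 1, or 2
    return levels

def rx_ternary(levels):
    """Recover both streams by subtracting, at each UI, the stream
    that did not change in that UI (assuming a known initial state)."""
    bits_a, bits_b, a, b = [], [], 0, 0
    for i, lvl in enumerate(levels):
        if i % 2 == 0:
            a = lvl - b
            bits_a.append(a)
        else:
            b = lvl - a
            bits_b.append(b)
    return bits_a, bits_b
```

In this simplified model, a round trip through `tx_ternary` and `rx_ternary` recovers the original bit streams exactly, showing why no special decoding circuitry is needed beyond ordinary sampling.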
2. Equalizer circuit in the receiver unit for transmission loss (ISSCC presentation 2.1)
The quality of the signal output from the transmitter unit degrades as it travels across printed circuit boards and other transmission lines. The degree of degradation depends on both the length of the transmission line and the signal speed, so signal loss increases as speeds rise, even over the same line. Conventionally, compensating for the attenuation that occurs at high frequencies yields a flat frequency response, thereby correcting the distortion. But as the signal band for high-speed transmission extends even further into the high-frequency range, a drop-off in low-frequency response, which previously was not a problem, makes it impossible to adequately correct the distortion. Fujitsu Laboratories has developed a circuit that compensates for signal loss by also flattening the frequency response at low frequencies. This technology has made it possible to carry a 32 Gbps signal over a transmission distance of 80 cm, which was previously not possible (Figure 3).
Figure 3: Frequency characteristics of equalizer circuit
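The flattening effect can be sketched numerically with a hypothetical first-order model (the pole and zero frequencies below are illustrative choices, not figures from the paper): a channel modeled as a low-pass response is cascaded with an equalizer whose zero boosts high frequencies, and placing the equalizer's zero at the channel's pole flattens the combined response.

```python
import math

def channel_gain(f, f_pole=4e9):
    # First-order low-pass model of transmission-line loss.
    return 1.0 / math.sqrt(1.0 + (f / f_pole) ** 2)

def equalizer_gain(f, f_zero=4e9, f_pole=40e9):
    # A zero boosts high frequencies; a higher pole limits the boost.
    return math.sqrt(1.0 + (f / f_zero) ** 2) / math.sqrt(1.0 + (f / f_pole) ** 2)

def combined_gain(f):
    # Cascade of channel and equalizer: flat up to the equalizer pole.
    return channel_gain(f) * equalizer_gain(f)
```

With the zero placed at the channel pole, the cascade stays within about 1% of unity gain out to several GHz, even though the channel alone has lost roughly 3 dB at its pole frequency.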
3. Receiver circuit in the receiver unit (ISSCC presentation 2.5)
The receiver circuit reads the original data from the signal that has been reshaped by the equalizer circuit. To do so, it must synchronize to the speed (frequency) and timing (phase) of the signal, sample the signal, and determine the original digital values. Conventionally, timing errors that occur when reading data would be detected from the received data by a timing-error detection unit and then corrected through resynchronization via a timing modulation circuit (Figure 4, upper left). At higher signal speeds, however, this method requires clock-timing control at a precision approaching the limit of existing technologies. Instead of synchronizing the clock, Fujitsu Laboratories has developed a data interpolation method in which the data is periodically sampled and voltage interpolation is applied to two actually sampled values to synthesize a virtual sample that is synchronized to the clock (Figure 4, lower left & right). This technology obviates the need for a timing modulation circuit with high resolution in the time axis, making it amenable to further speed increases in the future.
Figure 4: Principles of the data interpolation method
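The data interpolation principle can be sketched in a few lines. The function below is a simplified software illustration (the actual receiver performs this in mixed-signal hardware): it linearly interpolates the voltages of adjacent real samples to synthesize virtual samples at a fractional phase offset aligned with the local clock.

```python
def interpolate_samples(samples, phase):
    """Synthesize virtual samples aligned to the local clock by voltage
    interpolation between adjacent real samples.  `phase` in [0, 1) is
    the fractional offset of the clock within one sampling interval."""
    return [s0 + phase * (s1 - s0)
            for s0, s1 in zip(samples, samples[1:])]
```

For example, with real samples `[0.0, 1.0, 0.0, 1.0]` and a half-interval phase offset, the synthesized samples all sit midway between neighbors; no fine-grained adjustment of the sampling clock itself is required.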
These technologies are expected to contribute significantly to performance improvements in the next generation of servers and supercomputers.
Fujitsu Laboratories will work to apply these technologies to product areas related to big data, such as the backplane interfaces that link the boards together in the servers.
About Fujitsu
Fujitsu is the leading Japanese information and communication technology (ICT) company offering a full range of technology products, solutions and services. Over 170,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers. Fujitsu Limited reported consolidated revenues of 4.5 trillion yen (US$54 billion) for the fiscal year ended March 31, 2012.
About Fujitsu Laboratories
Founded in 1968 as a wholly owned subsidiary of Fujitsu Limited, Fujitsu Laboratories Limited is one of the premier research centers in the world. With a global network of laboratories in Japan, China, the United States and Europe, the organization conducts a wide range of basic and applied research in the areas of Next-generation Services, Computer Servers, Networks, Electronic Devices and Advanced Materials.
About Fujitsu Laboratories of America
Fujitsu Laboratories of America, Inc. is a wholly owned subsidiary of Fujitsu Laboratories Ltd. (Japan), focusing on research on Internet, interconnect technologies, software development and solutions for several industry verticals. Conducting research in an open environment, it contributes to the global research community and the IT industry. It is headquartered in Sunnyvale, Calif.
Source: Fujitsu Laboratories