SEE THE LIGHT: GIGABIT ETHERNET OVER COPPER

December 1, 2000

SCIENCE & ENGINEERING NEWS

San Diego, CALIF. — Logan G. Harbaugh reports that they said it couldn’t be done: running Gigabit Ethernet over standard Category 5 twisted-pair cables. Well, it has been done, and many people are excited about the possibilities of running gigabit speeds over their existing wiring. The question at hand is, at what point is it appropriate? The answer seems to be: in the server room, or wherever the existing cabling plant can support it.

The exciting thing about gigabit over copper is the price. Both switch ports and network interface cards (NICs) are less expensive, because they don’t need the expensive optical transceiver to convert electrical signals to optical and back. If existing wiring can be used, that also may offer substantial cost savings. However, as with many other items in the IT industry, acquisition cost does not tell the whole story. Nevertheless, interest in Gigabit Ethernet is picking up. Market watcher Dell’Oro Group (http://www.delloro.com) estimates that the Gigabit Ethernet segment will nearly double by this time next year, reaching about $4 billion. Not exactly chump change, and that’s only for the switches themselves.

The disappointing thing is that getting actual gigabit speeds may require re-termination, if not new cabling. Network administrators also may find they need to upgrade patch panels and wall jacks, not to mention adding new patch cords, before they can get full speeds. The good news here is that it will take an infrastructure expert (like you) to make the call.

Buying a server based on price is not the best way to go; it’s a crucial part of the network, and any cost savings can be quickly negated by higher administrative costs, downtime, etc. Likewise, no one would buy a server and not put a good UPS on it (we hope). Anyone looking to run lots of Gigabit Ethernet should be looking at costs from the same perspective.

Running gigabit to the desktop for a normal LAN environment would be pretty silly. Even if costs were not a couple of orders of magnitude higher than for switched 100Mbps ports, the stringent requirements for the wiring would increase costs, even if just to test the wiring to ensure it’ll handle the bandwidth. When you add the fact that a normal desktop running Windows 98 will never see even 500Mbps from a gigabit adapter, the whole idea doesn’t make much sense.
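The throughput argument is easy to put in numbers. A quick sketch, using purely illustrative effective rates (the figures below are assumptions for the sake of the arithmetic, not benchmarks), shows how little a desktop gains when the operating system caps what the adapter can deliver:

```python
def transfer_time_seconds(file_size_mb, effective_mbps):
    """Time to move a file over a link at a given effective throughput.

    file_size_mb   -- payload size in megabytes (1 MB = 8,000,000 bits here)
    effective_mbps -- realistic throughput in megabits per second
    """
    bits = file_size_mb * 8_000_000
    return bits / (effective_mbps * 1_000_000)

# A 100 MB file at assumed effective rates:
for label, rate in [("switched 100Base-T", 90),
                    ("desktop gigabit, OS-limited", 500),
                    ("server-class gigabit", 900)]:
    print(f"{label}: {transfer_time_seconds(100, rate):.1f} s")
```

Even under these generous assumptions, the OS-limited desktop link closes the transfer only a few times faster than switched 100Mbps, while the port costs differ far more than that.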

So where would gigabit over copper make sense? There are several scenarios. Connecting servers to the backbone, where all wiring is contained within the server room, is one possible application, as is using the technology for high-speed uplinks to workgroup switches or remote wiring closets. Another application would be to connect specialized workstations that need higher speeds and are running operating systems that can get the higher speed out of the adapter.

Connecting servers to the backbone using gigabit over copper makes a lot of sense. In a server room, Cat 5 cabling is much more durable than fiber, especially in situations where the cabling may be run temporarily across the floor; step on fiber once, and you can kill the connection. With the number of servers once again increasing dramatically as companies implement server farms for load balancing, the cost of ports also can be a great factor. Also, because the connections often will be via patch cord from the switch directly to the server, without patch panels or wall sockets, getting a high-quality connection is relatively simple and inexpensive.

On the other hand, high-speed uplinks to local workgroup switches, or links from a main server room to local wiring closets, are not an ideal application for gigabit over copper. These connections are the most likely to have high utilization, and for long periods of time. Further, the number of these connections is not likely to be large enough that a big cost savings will be realized from using copper instead of fiber. You’ll also need to consider the future. The development of 10 Gigabit Ethernet (http://www.smartpartnermag.com/stories/issue/0,4537,2541381,00.html) is moving right along, and it’s extremely unlikely that a 10-gigabit standard will be produced for copper.

Specialized workstations may be processing graphics, manipulating large data sets, or performing other special mathematical modeling. These systems truly need the higher bandwidth of gigabit and should have operating systems that can take advantage of it. If — and it’s a big if — the cabling plant is already well-suited to gigabit over copper, and runs aren’t too long, copper may be the way to go here.

However, if runs are near the limit of 100 meters, or if cabling needs to be redone to pass Cat 5 standards from end to end, it may turn out to be cheaper to install fiber. The real issue: because these workstations run critical, bandwidth-intensive applications, does it make sense to save on up-front costs but fail to deliver the capability promised?

Terminating the wall jack and the patch panel properly is critical for the end-to-end signal quality necessary to fully support gigabit over copper. This process often is skimped on because doing it right is physically difficult and time-consuming, so network administrators may find that their cable plant cannot pass Cat 5 certification, even though all of its parts are Cat 5 certified.

New fiber connector technologies, such as 3M’s Volition and the MT-RJ standard, make the termination of fiber not only much easier than with the old ST and SC connectors, but also arguably easier than proper termination of Cat 5e copper cabling.

If running new cabling is relatively straightforward, the costs of installing fiber may not be much higher than redoing the copper cabling plant. If the plant has not yet been installed, or if the installed twisted-pair cabling cannot be upgraded to Cat 5 (for instance, if the wiring is the older Cat 3 standard), fiber may be the way to go.

The benefits include more than a cabling plant that easily can handle gigabit, because fiber is immune to electromagnetic interference (EMI). Fiber technology also provides the added benefit of having an upgrade path to (yep, you guessed it) 10 Gigabit Ethernet.

Alternatives

There is yet another option for workstations that need high bandwidth. This choice can provide a solution even with operating systems that may only get 150Mbps to 250Mbps of throughput with Gigabit Ethernet. If there are multiple Cat 5 jacks in each location, or if wiring has not yet been installed, multiport 10/100 adapters may be the solution. They provide two or four 10/100 ports on a single NIC, which can be aggregated together to provide a single 400Mbps connection.

That can provide higher throughput than a gigabit NIC, and at substantially lower system overhead, as well. The total cost also may be less, even counting the multiple cables necessary, because switched 10/100 ports are much less expensive than gigabit ports.
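To make the aggregation idea concrete, here is a sketch of how such a bundle might be configured with the Linux bonding driver. The interface names (eth0, eth1), the address, and the round-robin mode are all assumptions for illustration; the switch at the other end must also be configured to treat the member ports as one link.

```shell
# Sketch only: bundling two 10/100 NICs into one logical interface.
# Interface names and address are hypothetical.
ip link add bond0 type bond mode balance-rr   # round-robin across members
ip link set eth0 down
ip link set eth0 master bond0                 # enslave first NIC
ip link set eth1 down
ip link set eth1 master bond0                 # enslave second NIC
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0         # address lives on the bond
```

The design point the article makes holds here: each member link runs ordinary 10/100 Ethernet, so per-port costs stay low, and the aggregate bandwidth scales with the number of cables rather than with exotic signaling.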

While that approach may solve immediate needs, it’s becoming clearer and clearer that to go a lot faster, almost every installation will need to move to a fiber-based environment. While your clients can “do it now” or “do it later,” there’s an opportunity for you right now to become an expert in this new wiring infrastructure.

The Gigabit Ethernet Alliance recommends that potential users of 1000Base-T test their Cat 5 cabling for return loss and Equal Level Far-End Crosstalk (ELFEXT). Return loss is a measure of energy reflected by impedance mismatches in the cabling, while ELFEXT measures the noise from signal leakage at the receiving end of a connection. The alliance warns that these two dynamics may have little effect at the 10Base-T threshold of operation, but that higher speeds, both 100Base-T and 1000Base-T, are prone to performance degradation.
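Both figures are quoted in decibels relative to the transmitted signal, and the arithmetic behind a decibel figure is simple enough to sketch. The helper below is hypothetical, just to show the conversion; the 1% reflection number is an illustrative assumption, not a cabling specification:

```python
import math

def power_ratio_db(signal_power, unwanted_power):
    """Ratio of two powers in decibels.

    For return loss, unwanted_power is the reflected power; for ELFEXT,
    it is the far-end crosstalk noise power. Larger values are better:
    they mean less of the signal is being reflected or leaking.
    """
    return 10 * math.log10(signal_power / unwanted_power)

# If 1% of the transmitted power comes back as a reflection,
# the return loss is 20 dB:
print(f"{power_ratio_db(1.0, 0.01):.0f} dB")
```

A field tester reports these same dB figures per wire pair across the frequency band, and the link passes only if every measurement clears the threshold in the standard.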

Test suites are available from a number of vendors, including Datacom/Textron, Fluke Corp., HP, Microtest and Wavetek Wandel Goltermann. If, after testing, IT managers find that the Cat 5 links fail, the alliance recommends three kinds of corrective measures: switching to high-performance patch cables; reducing the number of connectors in the link; and reconfiguring some or all of the connectors. Fun!

What is Category 5 cable? It’s not necessarily cabling of a predefined size; it’s cabling that passes a set of tests to verify minimum data-transmission standards. Cat 5 is based on the ANSI/EIA/TIA 568 Commercial Building Telecommunications Wiring Standard developed by the Electronics Industries Association as requested by the Computer Communications Industry Association in 1985. Category 5 enhanced (Cat 5e), Cat 6 and Cat 7 also are defined by their ability to pass data. While Cat 5e cable is an enhanced version of Cat 5 that can be used anywhere Cat 5 can be used, the same cannot be said of Cat 6 and Cat 7.

It’s unfortunate that there is no standard for connectors, cabling or anything else that ensures interoperability between one vendor’s Cat 6 cable and another’s Cat 6 jack. Currently, a number of vendors are working on, or trying to promote, Cat 6 and Cat 7 standards, but there is no accepted industry standard. Sounds to us like the various types of Unix.

As with the various categories of twisted-pair cabling, there are a number of different fiber standards, although in the case of fiber, the only difference is the connectors. The actual fiber is the same for all. The most prevalent connectors are the ST and SC connectors, which account for most of the existing switch and NIC interfaces. 3M’s Volition, MT-RJ, Lucent’s LC and Panduit’s Opti-Jack are all connectors designed to improve the process of terminating fiber connections.

As an example, a standard ST connector may take an experienced technician between five and 10 minutes to terminate. A Volition connector can be terminated in a minute or so by even an inexperienced staffer, and the quality of the connection is less subject to the ability of the technician, as well.
