Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

January 9, 2014

Near-Linear Network Optimization Technique Proposed

Tiffany Trader

A team of researchers from the Massachusetts Institute of Technology has devised a new framework for solving difficult network optimization problems. Mathematicians and computer scientists have long used the maximum-flow problem, or “max flow,” to determine how efficiently material can be routed through a network, but as networks grow ever more complex, solving the problem becomes prohibitively time-consuming. Recently, however, scientists at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed an almost-linear-time algorithm for computing “max flow” that could boost the efficiency of even the largest networks.

The maximum-flow problem represents a network as a graph: a set of nodes joined by connecting lines, known formally as vertices and edges. Each edge has a maximum capacity, and the goal of “max flow” is to calculate how many units of something — data, vehicles, water — can be moved from one designated point in the network to another. While traditional algorithms for “max flow” work well on smaller networks, their running times grow much faster than the networks themselves, so on very large graphs the computation becomes prohibitively expensive.
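To make the problem concrete, here is a minimal sketch of the classical Edmonds-Karp algorithm for max flow. This is not the MIT team's method, and it is far from near-linear (its worst case is O(V·E²)); the graph, node names, and capacities below are invented purely for illustration:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly find a shortest augmenting path by BFS.

    `capacity` is a dict-of-dicts of residual capacities, modified in place.
    """
    flow = 0
    while True:
        # Breadth-first search for an augmenting path from source to sink.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in capacity[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path remains: the flow is maximal
        # Find the bottleneck (minimum residual capacity) along the path.
        bottleneck = float("inf")
        v = sink
        while parent[v] is not None:
            bottleneck = min(bottleneck, capacity[parent[v]][v])
            v = parent[v]
        # Push that amount along the path, updating residual capacities.
        v = sink
        while parent[v] is not None:
            u = parent[v]
            capacity[u][v] -= bottleneck
            capacity[v][u] = capacity[v].get(u, 0) + bottleneck
            v = u
        flow += bottleneck

# A toy 4-node network: edge capacities chosen for illustration only.
graph = {
    "s": {"a": 10, "b": 5},
    "a": {"b": 15, "t": 5},
    "b": {"t": 10},
    "t": {},
}
print(max_flow(graph, "s", "t"))  # prints 15
```

Each BFS treats all augmenting paths of the same length as equally good — exactly the kind of undifferentiated search the new technique avoids by identifying the bottleneck structure up front.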

“There has recently been an explosion in the sizes of graphs being studied,” says one of the lead authors, Jonathan Kelner, an associate professor of applied mathematics at MIT and a member of CSAIL. “For example, if you wanted to route traffic on the Internet, study all the connections on Facebook, or analyze genomic data, you could easily end up with graphs with millions, billions or even trillions of edges.”

The CSAIL researchers developed a new theoretical algorithm that has the potential to dramatically reduce the number of operations needed to solve the max-flow problem. With this near-linear solution, it may become practical to optimize flow across enormous graphs, such as Internet traffic or networks derived from genomic data.

Where previous algorithms treated all the paths within a graph as equals, the new technique pinpoints the routes that create a bottleneck within the network. The team’s algorithm separates each graph into clusters of well-connected nodes, and the paths between them that create bottlenecks.

“Our algorithm figures out which parts of the graph can easily route what they need to, and which parts are the bottlenecks. This allows you to focus on the problem areas and the high-level structure, instead of spending a lot of time making unimportant decisions, which means you can use your time a lot more efficiently,” Kelner says.

These enhancements yield a near-linear algorithm: the time required to solve a problem is almost directly proportional to the size of the network. If the number of nodes grows by a factor of 10, the time to solution goes up by a factor of 10 (or close to it), rather than by a factor of 100 or 1,000 as under previous techniques.
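The scaling claim is simple arithmetic to check. A hypothetical back-of-the-envelope sketch (the operation counts are invented and ignore constants and log factors; only the growth rates matter):

```python
# Hypothetical cost models: a near-linear algorithm vs. a quadratic one,
# measured in abstract "operations" for a graph with m edges.
def near_linear_ops(m):
    return m        # ~m operations

def quadratic_ops(m):
    return m * m    # ~m^2 operations

m = 1_000
grow = 10  # make the graph 10x larger

# Near-linear: 10x the input costs ~10x the work.
print(near_linear_ops(grow * m) // near_linear_ops(m))  # prints 10
# Quadratic: 10x the input costs ~100x the work.
print(quadratic_ops(grow * m) // quadratic_ops(m))      # prints 100
```

At a trillion edges, that gap between a factor of 10 and a factor of 100 per order of magnitude is the difference between feasible and hopeless.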

“This means that it scales essentially as well as you could hope for with the size of the input,” says Kelner.

Besides Kelner, the research team includes Lorenzo Orecchia, an applied mathematics instructor at MIT, as well as graduate students Yin Tat Lee and Aaron Sidford.

The authors will present their paper this week at the ACM-SIAM Symposium on Discrete Algorithms (SODA) in Portland, Oregon. The work, which won a best paper award at this year’s conference, also appears in the SODA proceedings.
