Rapid, Granular, Global Network Management

By Dennis Barker

November 10, 2008

On his quest to find the golden city of Timbuktu, explorer Alexander Gordon Laing would dispatch a trusty messenger to carry updates back to his sponsors in Tripoli. Laing wouldn’t know for months if the messenger made it, if he ever knew at all. That was a typical problem with 19th-century information “technology.”

Today, people who live and die by network and application performance worry about information lag, too — and for good reason. According to a survey by Infonetics Research, outages and slowdowns cause midsize companies to experience about 140 hours of downtime every year, to the tune of about $800,000 annually. Focusing on the source of application outages could save many organizations “a significant amount of money,” Infonetics analysts conclude.

There is no lack of vendors offering ways to monitor network performance, but SevOne does a few things that might make it stand out. The company launched in 2005 to develop network- and application-performance technology that would be responsive in real time, yet inexpensive and easy to use compared to legacy products. The technology is targeted at people who need to see exactly what’s going on and when, way out across their network empire, remote outposts and all.

“We developed a distributed, peer-to-peer architecture for network management that allows our customers to capture the data they need to respond to specific performance issues,” says SevOne CEO Michael Phelan. “Our design criteria were to make something very scalable, that could accommodate change very quickly, and was much faster than other tools. Being able to scale to cover every application, every device, in a cost-effective and meaningful way are a few of the things that we think make us unique.”

The company’s technology is embodied in its software and two types of appliances. The Performance Appliance Solution (PAS) is the standard device, incorporating SNMP and NetFlow monitoring, alerting, and sub-minute polling. There are five PAS models, capable of watching from 5,000 to 65,000 network elements. The Dedicated Network Flow Collector (DNC) appliance exists specifically for large NetFlow-based deployments. SevOne’s hardware/software package installs in a datacenter “as near to end-users as possible to collect traffic data and allow operators to immediately pinpoint problems anywhere and avoid developing slowdowns,” Phelan says.

“Each one of our appliances is both a data collector and a reporter,” he explains. “They can work stand-alone, or be joined with others using our P2P architecture. Essentially, they operate as a distributed environment, processing volumes of data that gets pushed out. You request a report or an operation. One appliance could be in Boston, one in New York, one in Chicago, and they all work together. It’s a lot like a grid computer. All the systems share a database that lets them know which peer has the information needed to originate a report. You can log into any of the peered appliances to get a report on any indicator.” SevOne supports all the usual application and network monitoring standards. If, for example, a spike is identified using SNMP, an analyst can drill down further using NetFlow.
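The peer-indexed reporting model Phelan describes can be sketched in a few lines. This is an illustrative toy, not SevOne's implementation: the class names, the shared indicator registry, and the data shapes are all invented to show how any peer can originate a report by routing each indicator to the peer that holds its data.

```python
class Peer:
    """A monitoring appliance that is both a data collector and a reporter."""
    def __init__(self, name):
        self.name = name
        self.metrics = {}  # indicator name -> list of collected samples

    def record(self, indicator, value):
        self.metrics.setdefault(indicator, []).append(value)


class PeerCluster:
    """Peered appliances sharing a registry of which peer owns each indicator."""
    def __init__(self, peers):
        self.peers = {p.name: p for p in peers}
        self.registry = {}  # indicator name -> owning peer's name

    def record(self, peer_name, indicator, value):
        self.registry[indicator] = peer_name
        self.peers[peer_name].record(indicator, value)

    def report(self, indicators):
        # A report requested at any peer is assembled by looking up each
        # indicator's owner in the shared registry and merging the results.
        return {i: self.peers[self.registry[i]].metrics[i]
                for i in indicators if i in self.registry}


cluster = PeerCluster([Peer("boston"), Peer("nyc"), Peer("chicago")])
cluster.record("boston", "router1.cpu", 72)
cluster.record("nyc", "switch4.util", 55)
print(cluster.report(["router1.cpu", "switch4.util"]))
```

The point of the design is that there is no central reporting server to bottleneck on: the registry tells any peer where the data lives, so report generation is distributed across the grid.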

SevOne set out to design a system that can scale not just in terms of network size but that also can expand to handle new devices and new applications quickly. “Our company founders have a background in the banking industry, where consolidation has resulted in organizations having two of everything, lots of legacy tools, management tools from all kinds of vendors,” says Phelan. “They’re asked to handle different types of applications, like video and VoIP, and different types of devices. There’s frustration trying to keep pace with these changes.”

As new routers, switches, access points and so on are added to a network, PAS can either discover them automatically or they can be added using an API. “In a health care situation, you might add a new scanner to the network,” Phelan says. “We can have it logged in and be monitoring its performance sometimes in minutes, perhaps a couple days at most.”

In an increasingly on-demand real-time world, though, it’s not only what you can monitor, but how often. “A lot of bad things can happen in under a minute,” Phelan says. “Our technology can monitor the most critical components, including your server CPUs all the way to up-links, at whatever frequency your … service level agreements require. If performance requirements aren’t going to be met, our appliance can issue an alert. Critical links need to be monitored at sub-minute frequencies, and our system will let you do that. Our customers in financial trading need to monitor things down to the second.”
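Per-device polling frequency is the mechanism behind that claim. A minimal sketch, using a priority queue to interleave devices that are polled at different intervals (the device names and intervals are hypothetical, chosen to mirror the trading-floor example):

```python
import heapq

def poll_schedule(devices, horizon_s):
    """Return the ordered (time, device) polls within horizon_s seconds.

    devices: mapping of device name -> poll interval in seconds.
    A min-heap keyed on next-poll time interleaves fast and slow pollers.
    """
    heap = [(interval, name, interval) for name, interval in devices.items()]
    heapq.heapify(heap)
    polls = []
    while heap and heap[0][0] <= horizon_s:
        t, name, interval = heapq.heappop(heap)
        polls.append((t, name))
        heapq.heappush(heap, (t + interval, name, interval))
    return polls

# A critical up-link polled every second vs. a branch router every 60 s:
sched = poll_schedule({"core-uplink": 1, "branch-router": 60}, horizon_s=5)
print(sched)  # five polls of core-uplink; branch-router isn't due yet
```

The same structure lets an SLA-critical link run at sub-minute frequency without forcing every device in the inventory onto the same aggressive schedule.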

He continues, touting the distributed nature of the solution: “The system has to be able to react. Bad things happen quickly. You can’t wait for reports to generate. With our appliances stationed across the network, we’re using processors across that distributed grid to create large reports that have millions of indicators in seconds. People think we’re using a canned graphic when we produce a graph that charts utilization over 24 hours and it takes a tenth of a second. We have all these cores working together to collect data and generate a report.”

SevOne’s system also yields a more accurate audit trail, providing historical data that lets customers demonstrate performance transparency, “to show that they had bandwidth, they had the switches in place, down to the individual device,” Phelan says. “Legacy solutions average out or roll up the old data. Days might show up as a single data point. A five-second spike would be completely flattened out using those tools. With our reporting capabilities, you can pinpoint any time to the level of granularity you need, which is important because a five-second spike can cause a serious disruption to a financial transaction or a VoIP application.” Each SevOne appliance can store about a year’s worth of detailed data, the company says.
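The flattening effect Phelan describes is easy to demonstrate with a few lines of arithmetic. This toy example (the utilization numbers are invented) rolls one hour of per-second samples into a single averaged data point, the way a legacy tool might, and the five-second spike vanishes:

```python
def rollup(samples, bucket):
    """Average fixed-size buckets of samples — how rolled-up history is stored."""
    return [sum(samples[i:i + bucket]) / bucket
            for i in range(0, len(samples), bucket)]

# One hour of per-second link utilization: flat at 10%, with a 5-second
# spike to 100% in the middle.
raw = [10] * 3600
raw[1800:1805] = [100] * 5

print(max(raw))                # raw data preserves the spike: 100
print(max(rollup(raw, 3600)))  # the one-hour average flattens it to 10.125
```

Keeping the raw samples is what makes it possible to "pinpoint any time" after the fact; once the rollup has happened, the spike is unrecoverable.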

One SevOne user, Aramark, suffered a spike that slowed the order processing system down to 5 percent of its usual speed. “The IT director was able to look at the screen, see the SNMP spike, take the cursor, highlight the spike, and with a few clicks determined that all the traffic was related to people going online for election coverage,” Phelan says. “We provide that overall visibility from one screen, one pane of glass. You don’t have to open another application, and can resolve down to individual IP addresses to see who is using what infrastructure.”

The Cable’s Out?!

There might be no provider as familiar with angry customer service calls as the cable company. To keep those calls to a minimum, broadband operator Comcast looked for a tool giving it better insight into all the components in its infrastructure. Comcast’s IT managers also wanted a system that could provide that insight on a continual basis, not just intermittently. Comcast chose SevOne because its tools gave Comcast the granularity and the rapid data retrieval needed to make sure service levels remain intact, Phelan says.

“One of the biggest problems our customers are facing is they need real-time monitoring,” says Vess Bakalov, SevOne’s chief technology officer (and former network architect at BankOne). “We use high-speed algorithms in our system to provide real-time performance management. Using our analysis tools and reports, Comcast is able to have the kind of performance transparency they need.”

Comcast’s Jeff Gill, senior director for network surveillance, confirms that SevOne’s system delivers the data his team needs in “two to five seconds,” whereas Comcast’s legacy system “would take three to four hours.”

Using SevOne’s application, “we can literally bring up a device in a particular region, state, or area of the network and get an all-around status of the device, from performance trends, history, current alerts and anything else happening that has some significance to the problem,” Gill says.

“One reason we displaced their incumbent solution,” Bakalov says, “is our scalability. We’re managing more than 10 billion nodes for Comcast. Our P2P architecture is unique. Some competitors might match us in terms of functionality, but we focus on the entire enterprise, and we don’t think anyone can match us in terms of volume.”

Other customers include NYU, SUNY Stony Brook, HBO, Cincinnati Bell, JP Morgan Chase, Thomson Financial and Credit Suisse.

SevOne says it competes with the likes of Concord/CA, NetIQ, InfoVista, HP and IBM, but Phelan says none of them provide the combination of real-time monitoring, speed of reporting, flexibility of polling frequency, easy interface, and scalability that its technology and appliances offer.

“The very distributed architecture, and the ability to deliver second-by-second performance views at a very low price point” distinguish SevOne, says Richard L. Ptak, analyst with Ptak, Noel & Associates. “They are very cost effective with near-real-time reporting. There are relatively few appliance-based management solutions in this space.” Ptak says SevOne is “definitely an emerging company with a product that addresses a compelling pain point.”

