November 23, 2007
More businesses than ever are employing high performance computing capabilities to fulfill their mission-critical needs. While many of these companies aren't doing traditional technical computing, they still require a level of processing power, networking performance or storage scale that necessitates HPC assets. In most cases, the systems are not being used to produce a single answer or model a specific problem, but rather to provide a continuous high performance capability for processing real-time transactions. In this type of environment, pure performance is not enough; marrying HPC with mission-critical computing is the real challenge.
Examples of such businesses include Wal-Mart, NASDAQ, and FedEx, three companies that shared their experiences with high performance computing at a Masterworks session at SC07 in Reno last week. The session was organized with the help of the Council on Competitiveness, an NGO that focuses on U.S. economic competitiveness opportunities and challenges.
NASDAQ -- Speed, Cost and Reliability are Key
As executive vice president of Operations and Technology and chief information officer of NASDAQ since 2005, Anna Ewing has witnessed a rapid transformation of financial market exchanges. Although the industry is now extremely high-tech, it's been slow to become globalized in the manner of most other industries. Here in the U.S., and even more so in other countries, the exchanges have been maintained and protected as near monopolies by their government benefactors. Today though, the globalization of market exchanges is occurring in parallel with the rapid increase in electronic trading volume. In this environment, transaction speed, data throughput and low latency messaging are the technological features that give exchanges their competitive edge.
The most immediate challenge for NASDAQ is to keep up with the message data as electronic exchange traffic continues to skyrocket. Ewing says the exchange used to double its data traffic every year; now it's every six months. The interconnectedness of the global markets is also stressing the system. Thanks to the near instantaneous transfer of market data, disruptive financial events quickly ripple through the world's markets. In this volatile environment, predictability becomes a real asset and users gravitate to those exchanges where they know the trades can be executed reliably.
According to Ewing, their target is Four Nines (99.99 percent) reliability and they've been tracking to Five Nines (99.999 percent). Immediately after 9/11, the NASDAQ systems remained operational, thanks to a virtualized model and computing resources that were distributed across the country. But a lot of their customers were not nearly so fortunate, either because they relied on New York assets or because the redundant systems they had in place had never been tested, and didn't perform as expected. Because of this and the general chaos of the financial environment, NASDAQ ended up voluntarily shutting down the exchange after 9/11. The lesson for NASDAQ was to include their customers in their business continuity planning and testing.
Because of the ubiquity of Internet applications and recent changes to the market regulatory framework, the barriers to automated trading have lowered dramatically. Achieving low latency market data messaging has become a critical feature for attracting traders. At NASDAQ, they're constantly looking for ways to improve the messaging infrastructure and shave time off transactions. Ewing says they can now provide less than a 1 ms round trip per message. In an effort to shave microseconds of latency from trades, some customers are colocating in NASDAQ facilities to get an edge over their competitors coming in through the WAN.
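To make the round-trip numbers concrete, here is a minimal sketch (purely illustrative, not NASDAQ's stack) that measures message round-trip time over a loopback TCP socket. Loopback latencies sit far below WAN latencies, which is exactly the gap colocated customers are exploiting:

```python
import socket
import threading
import time

def echo_server(sock):
    """Accept one connection and echo every message back."""
    conn, _ = sock.accept()
    with conn:
        while data := conn.recv(64):
            conn.sendall(data)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # OS picks a free port
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.create_connection(srv.getsockname())
cli.sendall(b"warmup")  # warm up the connection before timing
cli.recv(64)

start = time.perf_counter()
cli.sendall(b"ping")
cli.recv(64)
rtt_us = (time.perf_counter() - start) * 1e6
print(f"loopback round trip: {rtt_us:.0f} microseconds")
cli.close()
```

On commodity hardware a loopback round trip typically comes in well under the 1 ms figure Ewing quotes for the full exchange path.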
"From a technology perspective, speed, reliability and low cost are the life blood of our market," says Ewing. "On any given day, we will process over two billion transactions at sub-millisecond speeds, at rates of over 200,000 transactions per second."
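A little back-of-envelope arithmetic on Ewing's figures (only the two quoted numbers are from the source; the rest is hypothetical calculation) shows why the throughput must come from massive concurrency rather than fast serial streams:

```python
# Figures quoted by Ewing: two billion transactions per day,
# peak rates over 200,000 transactions per second.
transactions_per_day = 2_000_000_000
peak_rate_per_sec = 200_000

# Even at the peak rate, the daily volume takes hours to clear:
hours_at_peak = transactions_per_day / peak_rate_per_sec / 3600
print(f"{hours_at_peak:.1f} hours of sustained 200k tps")  # ~2.8 hours

# A 1 ms round trip caps a single serial stream at 1,000 msgs/sec,
# so hitting 200,000 tps requires hundreds of concurrent streams.
serial_cap = 1 / 0.001
print(f"{serial_cap:.0f} msgs/sec per serial stream")
```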
Because of the rapidly increasing volumes of transactions, scaling their computing infrastructure becomes a continuous process, not something to be addressed every three or four years as equipment becomes obsolete. NASDAQ relies almost exclusively on commodity platforms, along with their own customized software. Using this model, over the last several years they've been able to reduce their cost base by 70 percent.
"There's nothing fancy about our platforms," explains Ewing. "It's the software and network engineering that we perform that is, quite frankly, our core competence -- our secret sauce, if you will."
Wal-Mart -- The Challenge of the 410 Billion Row Table
Nancy Stewart, senior vice president and chief technology officer of Wal-Mart Stores Inc., is in charge of the company's infrastructure, operations and technology roadmap. That turns out to be quite a responsibility. Wal-Mart is the largest retailer in the world, a $370 billion company, whose revenue is larger than IBM, Intel, Microsoft, HP and Dell combined. The company is on track to become the first $1 trillion company within the next few years.
Although Wal-Mart does not talk specifics about the scope of the computing and storage infrastructure it administers, in order to manage their inventory and supply chain, the company must process a 410 billion row table to figure out what is going to end up on its world-wide store shelves on any given day. The data has to be massaged very quickly, so that inventory control can react to real time events, like disasters, man-made supply disruptions or seasonal demand spikes. While the stores themselves may close, the company's IT infrastructure is up 24/7.
"The value for us in using high performance computing is related to the fact that we have one of the largest data stores in the world," says Stewart. "In terms of using that data store, in any given two hour period we have to process over two petabytes of data."
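Stewart's two-petabytes-in-two-hours figure implies a striking sustained data rate. A quick hypothetical calculation (only the quoted figure is from the source):

```python
# Stewart's figure: two petabytes processed in any two-hour window.
petabyte = 10**15          # bytes, decimal PB
data_bytes = 2 * petabyte
window_sec = 2 * 3600

rate_gb_per_sec = data_bytes / window_sec / 10**9
print(f"~{rate_gb_per_sec:.0f} GB/s sustained")  # ~278 GB/s
```

Sustaining hundreds of gigabytes per second of processing, around the clock, is the kind of demand that pushes a retailer into HPC-class infrastructure.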
Wal-Mart develops about 80 percent of their software in-house to maintain the level of reliability and availability that they require. When your company is netting $2 billion per hour on the day after Thanksgiving, downtime is not really an option. To work with Wal-Mart, suppliers and other partners have to match the retailer's devotion to continuous availability. Because of the magnitude of transactions and the cash flow, Wal-Mart doesn't maintain service level agreements (SLAs) with their computing partners. According to Stewart, none of them could afford the penalties involved with any downtime.
The ongoing problem for Wal-Mart is that their inventory management database has become so large that they've maxed out on their ability to handle it. The company's application represents the "Grand Challenge" of real-time transaction processing. A trillion-row table, which they foresee in the next few years, is going to be difficult to process in real time. What they're really looking for are predictable tools that can scale to their future needs. In truth, Stewart would prefer even faster turnaround on the inventory they currently manage.
"I really need to be able to mine the data much more quickly than I am now," admits Stewart. "I'm not getting that today."
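A back-of-envelope sketch (with assumed, hypothetical figures for row size and scan rate; only the trillion-row scale is from the source) suggests why a trillion-row table resists real-time mining:

```python
# Why a trillion-row table is hard to mine in real time.
rows = 1_000_000_000_000
bytes_per_row = 100            # assumed average row size
scan_rate = 10 * 10**9         # assumed bytes/sec scanned per node

table_bytes = rows * bytes_per_row  # 100 TB at these assumptions

single_node_hours = table_bytes / scan_rate / 3600
print(f"{single_node_hours:.1f} hours for one full scan on one node")

# To finish a full scan in 60 seconds instead:
nodes_needed = table_bytes / scan_rate / 60
print(f"~{nodes_needed:.0f} nodes to scan it in a minute")
```

Under these assumptions a single node needs hours per scan, and even near-real-time turnaround demands a cluster of well over a hundred nodes, before accounting for joins, indexing or contention.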
FedEx -- Logistics Planning on a Grand Scale
Kevin Humphries is the senior vice president of Technology Systems for FedEx Corporate Services and is responsible for setting technology direction as well as providing data center, network and field infrastructure support. The company's computing technology orchestrates the delivery of millions of items each day around the world, using a fleet of over 600 aircraft and 75,000 motorized vehicles.
According to Humphries, the only way they're able to pull off this global logistics puzzle is to employ HPC simulation and modeling to help plan the FedEx routes. Trucks and planes have to be continually shuffled from place to place in the most efficient manner possible to make timely deliveries and to optimize resources. It's not just a mega-version of the traveling salesman problem. In addition to the complex routing, the company has to deal with unforeseen events like weather and equipment breakdowns. On top of that, FedEx has essentially no control over shipping demand at any given time. But it's the scope of the problem that precipitates the need for HPC.
"We have to take everything that comes our way," says Humphries. "That creates about 30 million origin-destination pairs that have to be planned 24/7 every hour of the day, over all the assets that we own."
The initial logistics plan for using the assets is produced with traditional HPC cluster tools well in advance of the actual shipments. As time winds down to the day of execution, the model is continuously refined (some of it on grid platforms) to support a real-time response. The refined model has to react to environmental conditions, like weather, mechanical breakdowns and infrastructure problems. An extremely high capacity computing environment is used to coalesce all the information in real time.
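To give a flavor of the underlying problem class, here is a toy nearest-neighbor routing heuristic. This is only a sketch of the kind of optimization involved; FedEx's production planners are far more sophisticated and must also absorb weather, breakdowns and shifting demand:

```python
import math

def nearest_neighbor_route(depot, stops):
    """Greedy tour: always drive to the closest unvisited stop.

    A classic (suboptimal) heuristic for traveling-salesman-style
    routing; real logistics planners use much stronger methods.
    """
    route = [depot]
    remaining = list(stops)
    pos = depot
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(pos, p))
        remaining.remove(nxt)
        route.append(nxt)
        pos = nxt
    return route

stops = [(4, 4), (1, 2), (3, 1), (0, 5)]
print(nearest_neighbor_route((0, 0), stops))
```

Even this greedy pass is quadratic in the number of stops; scaled to 30 million origin-destination pairs, replanned hourly against live conditions, the computational appetite of the real problem becomes clear.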
Humphries' main frustration with high performance computing technology is its uniqueness. Businesses like FedEx would like to see their HPC assets seamlessly embedded into their overall enterprise infrastructure rather than treated as an island of resources devoted to solving specialized problems. He thinks that transition is occurring, but they still struggle with some of the distinctive aspects of HPC, especially as it pertains to their cluster computing resources. The mainframes of the past were much easier to deal with compared to a system with thousands of nodes, where a job has to be split up into little pieces. Further constraining the use of these systems is the limited pool of talent that can manage those resources.
"I don't know where that changes though," says Humphries. "It's not something that every kid is going to learn in college and it's not something everybody is going to learn on the job."