September 22, 2008
Aleri announces results of second commissioned STAC performance report
CHICAGO, Sept. 22 -- Aleri Inc., a leading provider of enterprise-class complex event processing (CEP) technology, announced today that a second round of independent testing of the Aleri Streaming Platform has been completed and the results released by the Securities Technology Analysis Center (STAC). The test benchmarks the new 45 nm six-core Intel Xeon 7400 (Dunnington) against the 65 nm four-core Xeon 7300 (Tigerton) used in the previous round of tests. Both tests were run on a Sun Fire X4450 server running the Solaris 10 operating system. The results show that performance scales linearly in moving from the four-core Xeon 7300 to the new six-core Xeon 7400, yielding a throughput gain of more than 50 percent (consistent with the increase from four cores to six per processor) as well as power savings.
"When it comes to consolidating and analyzing market data, throughput and latency are both critical to many trading firms," said Jeff Wootton, VP of product strategy at Aleri. "Aleri's multi-threaded CEP technology lets firms take full advantage of the new six-core chips from Intel by scaling across available CPUs and cores. The performance gains in moving from the 4 core processor to the 6 core processor were even better than we expected."
"Aleri's ability to scale the performance of their application through additional cores holds huge potential for the future. Intel will continue to deliver performance increases through multiple cores. Applications that can extract performance from the core architecture will deliver great value to their customers," states Rick Jacobsen, director of financial service marketing at Intel Americas.
The test case was developed by Aleri and STAC, based on Aleri's experience with trading firms. The project was designed to measure the latency of the Aleri CEP system running on a Sun Fire X4450 server using six-core Intel Xeon processors. The Aleri data model used in the test was the same as in the first test: an order book aggregation model that operated on streaming full-depth order book feeds from US exchanges, aggregating them into a single consolidated order book stream as output.
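To make the test case concrete: a consolidation model of this kind sums the quantity available at each price level across venues. The following minimal Python sketch (with illustrative data structures and names, not Aleri's implementation, which processes incremental streaming updates rather than full snapshots) shows the core aggregation step.

    from collections import defaultdict

    def consolidate(books):
        """Merge full-depth order books from multiple exchanges into one
        consolidated book, summing quantity at each price level.
        `books` maps exchange name -> list of (side, price, qty) levels."""
        totals = defaultdict(float)  # (side, price) -> total qty
        for exchange, levels in books.items():
            for side, price, qty in levels:
                totals[(side, price)] += qty
        # Present bids highest-first and asks lowest-first, as a feed would.
        bids = sorted(((p, q) for (s, p), q in totals.items() if s == "bid"),
                      reverse=True)
        asks = sorted((p, q) for (s, p), q in totals.items() if s == "ask")
        return bids, asks

    # Example: two exchanges quoting the same instrument.
    books = {
        "NYSE":   [("bid", 100.25, 500), ("ask", 100.27, 300)],
        "NASDAQ": [("bid", 100.25, 200), ("ask", 100.26, 400)],
    }
    bids, asks = consolidate(books)
    print("bids:", bids)  # [(100.25, 700.0)]
    print("asks:", asks)  # [(100.26, 400.0), (100.27, 300.0)]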
Aleri is also an active contributor to the STAC-A1 working group within the STAC Benchmark Council, which is currently developing Version 1 of standard benchmark specifications for event-processing platforms. This order-book aggregation test case was developed prior to the creation of those specifications and is not a standard benchmark. Aleri plans to submit this test case to the Council as a proposed specification under Version 2 so that other CEP vendors can produce objectively comparable performance data for order-book use cases.
To review a copy of the "Aleri Streaming Platform on Intel Dunnington and Solaris 10: Order Book Consolidation Test" STAC report, go to http://www.stacresearch.com/node/3948. To read more about Aleri's experience with STAC benchmarking, go to http://blog.aleri.com/.
Aleri is a leading provider of enterprise-class complex event processing (CEP) technology for financial institutions and beyond. Aleri's CEP platform was designed from the ground up to provide the most robust architecture available for rapid implementation of mission-critical applications in the most demanding environments. Built for high throughput with minimal latency, Aleri's event processing technology allows customers to analyze and respond instantly to high-volume, high-speed data in order to minimize risk and increase competitive advantage. Aleri was the first to develop and deploy commercial enterprise-class applications built on event processing technology: the Aleri Liquidity Management System, used by some of the largest bank treasuries in the world, and the Aleri Market Liquidity Analysis engine, which consolidates and analyzes order book feeds from multiple exchanges to provide a powerful tool for trading in fragmented markets. Aleri is a global company headquartered in Chicago with offices in New York, New Jersey, London, and Paris. For more information, visit www.aleri.com.