November 14, 2012
SALT LAKE CITY, COSTA MESA and SAN MATEO, Calif., Nov. 14 – Emulex Corporation and uCIRRUS Corporation today announced a joint solution that will provide organizations with the ability to analyze and act on live data while business opportunities are still relevant and valuable. The solution integrates uCIRRUS's XPRESSmp, an in-network platform for managing live big data, with Emulex OneConnect 10Gb Ethernet (10GbE) Network Xceleration (NX) OCe12000-D adapters running FastStack Sniffer10G software, powered by Myricom. The combination provides 100 percent lossless packet capture on multiple 10GbE links concurrently, enabling up to 100Gb per second (100Gbps) of ingest, analysis and action.
To act effectively on opportunities surfaced by big data, changes in customer or user behavior, or anomalies in network performance, service providers and government agencies must be able to identify and analyze those opportunities the instant they happen. Traditionally, analysis of this behavior involves capturing and storing large volumes of data before analyzing it, delaying when intelligence can be gained and diminishing the opportunity to influence the behavior. The Emulex and uCIRRUS solution addresses this need for real-time analysis by capturing, analyzing and acting on network data at line speed. By placing the data management platform in the network or at the edge, the solution can process data as it is generated, before it is stored. This "live data" analysis enables results to be interpreted and acted on faster than with traditional big data analytics solutions.
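The contrast the paragraph above draws between store-then-analyze and process-as-generated can be illustrated with a minimal streaming aggregator that keeps its answer current as each record arrives, rather than waiting for a stored batch. This is purely an illustrative sketch; the class and data below are hypothetical and are not Emulex or uCIRRUS code.

```python
# A streaming (live) aggregator: the current answer is available after
# every record, with no store-first step. Contrast with a batch flow,
# which would only produce a result after all data had been collected.
class RunningMean:
    def __init__(self):
        self.count = 0
        self.total = 0.0

    def update(self, value):
        """Fold one new observation in and return the up-to-date mean."""
        self.count += 1
        self.total += value
        return self.total / self.count

# Hypothetical per-interval response times (ms) arriving as a stream.
stream = [120, 80, 100, 140]
m = RunningMean()
live_answers = [m.update(x) for x in stream]
print(live_answers[-1])  # 110.0 -- same as a batch mean, available sooner
```

The point is architectural rather than mathematical: a batch pipeline produces the same final number, but only after the data has landed in storage, whereas the live view can trigger action after every update.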
"The combination of Emulex FastStack Sniffer10G with uCIRRUS XPRESSmp changes the big data game by providing the ability to react to changes in customer behavior in real-time. This solution comes at a time when big data is quickly proliferating, particularly in the technology, scientific and business communities," said Shaun Walsh, senior vice president of marketing and corporate development, Emulex. "This joint solution is a big step forward in the evolution of big data going from a historical analysis tool to a resource that can immediately impact daily operations and the bottom line."
The solution introduces a three-layered, fully integrated approach to real-time big data analysis and action. The first layer consists of Emulex NX adapters, which enable lossless 10Gb ingestion scalable across multiple adapters. The second layer is the Emulex FastStack Sniffer10G network acceleration software, which provides highly parallelized, lossless capture. The third layer is uCIRRUS XPRESSmp at the processing level, offering intelligent processing for actionable analytics at the speed of the network, up to 100Gbps. The solution does not require a proprietary system purchase; it runs on industry-standard Windows and Linux servers and provides standards-compliant, fully scalable SQL processing, making it easily accessible and readily integrated into existing enterprise networks.
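The three-layer pattern described above — lossless ingest, parallel capture workers, and an analysis layer that produces an actionable result — can be sketched in a few dozen lines. Everything in this example is hypothetical (the queue-based fan-out, the per-source tally, the record format); it illustrates the shape of the pipeline, not the vendors' actual APIs.

```python
# Sketch of the three-layer pipeline: ingest -> parallel capture -> analysis.
import queue
import threading
from collections import Counter

def ingest(records, out_q, n_workers):
    """Layer 1: push every record onto the queue (lossless ingest)."""
    for rec in records:
        out_q.put(rec)
    for _ in range(n_workers):   # one sentinel per worker to signal end-of-stream
        out_q.put(None)

def capture_worker(in_q, results, lock):
    """Layer 2: a parallel capture worker tallying traffic per source host."""
    local = Counter()
    while True:
        rec = in_q.get()
        if rec is None:
            break
        local[rec["src"]] += 1
    with lock:                   # merge the local tally into the shared result
        results.update(local)

def analyze_live(records, n_workers=4):
    """Layer 3: run the pipeline and return per-source packet counts."""
    q, results, lock = queue.Queue(), Counter(), threading.Lock()
    workers = [threading.Thread(target=capture_worker, args=(q, results, lock))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    ingest(records, q, n_workers)
    for w in workers:
        w.join()
    return results

packets = [{"src": "10.0.0.1"}] * 3 + [{"src": "10.0.0.2"}] * 2
print(analyze_live(packets))
```

In a real deployment the queue would be the 10GbE link and the workers would be Sniffer10G capture rings feeding XPRESSmp's SQL engine; the sketch only shows how results accumulate as data flows, before anything is written to storage.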
"Live data management can transform any enterprise from reactive to proactive," said Peter Richards, CEO of uCIRRUS. "uCIRRUS and Emulex are collaborating to put real-time, actionable intelligence into the hands of organizations so they can better serve their constituents and predict trouble before it even appears and provide action in real-time while the opportunity is still open. For example, customers are using our solution for real-time quality of service (QoS) detection and remediation, and instantaneous fraud prediction, identification, and interception."
Emulex will showcase this solution at Supercomputing this week, November 12–15, in Salt Lake City, in booth #632.
Emulex, the leader in converged networking solutions, provides enterprise-class connectivity for servers, networks and storage devices within the data center. The company's product portfolio of Fibre Channel Host Bus Adapters, 10Gb Ethernet Network Interface Cards, Ethernet-based Converged Network Adapters, controllers, embedded bridges and switches, and connectivity management software is proven, tested and trusted by the world's largest and most demanding IT environments. Emulex solutions are used and offered by the industry's leading server and storage OEMs, including Cisco, Dell, EMC, Fujitsu, Hitachi, Hitachi Data Systems, HP, Huawei, IBM, NEC, NetApp and Oracle. Emulex is headquartered in Costa Mesa, Calif. and has offices and research facilities in North America, Asia and Europe. More information about Emulex (NYSE:ELX) is available at www.Emulex.com.
uCIRRUS takes data management to the network and the network edge, making Big Data live and actionable. Its software data management platform, XPRESSmp, ingests data as it is generated at the source, up to 100Gbps, and conducts simultaneous analysis and processing to enable businesses to take action while opportunities are the most relevant. Live data management and action can transform your enterprise from a reactive to a proactive, predictive one. Integrating this live data analysis with historical data sources provides context that enables patterns, anomalies and insights to emerge that can deliver answers while the question and opportunity are still relevant. Enterprises can identify, predict – and proactively intercept – fraud, security issues, and customer churn, for example. The extreme efficiency of XPRESSmp's parallel architecture and scalability enable you to convert more of your data to measurable assets, using your existing IT infrastructure and commodity hardware.
Source: Emulex; uCIRRUS