November 12, 2010
Cloud computing concept meets supercharged open-source network security tools at SC10
REDLANDS, Calif., Nov. 12 -- MetaFlows, Inc., a startup focused on leveraging emerging cloud and virtualization technologies for the next generation of network security solutions, will debut an innovative network security monitoring system as part of "SCinet," the SC10 networking infrastructure. By monitoring SCinet's diverse, high-throughput network, MetaFlows aims to demonstrate that its new network security monitoring ("NSM") system, the world's first fully SaaS-based offering, is ready for the big leagues. If successful, the demonstration would also signal the arrival of a cost-cutting paradigm shift that the network security industry and its patrons have been waiting for.
Founded upon battle-hardened, open-source resources (Emerging Threats signatures, Cyber-TA's BotHunter dialog-based correlator, Sourcefire's Snort VRT rules, etc.), MetaFlows' NSM reconciles and ranks IDS, flow, and active (local and global) intelligence through a revolutionary predictive global correlation system modeled on Google's PageRank algorithm, better surfacing true positives while significantly cutting down on false-positive clutter. MetaFlows' NSM then delivers and unifies these results, along with log management, through the world's first fully SaaS-based, real-time security console, with easy-to-use forensic tools for deep event analysis. To cap it all off, MetaFlows' Open-Sensor Technology helps NSM subscribers save thousands of dollars more per year by letting them use almost any off-the-shelf sensor hardware they prefer or, via Linux/FreeBSD or virtual machines, reuse their existing hardware.
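To illustrate the general idea behind PageRank-style event correlation (not MetaFlows' actual algorithm, whose details are proprietary), the sketch below ranks security events by how strongly they corroborate one another: an event "linked to" by many other high-scoring events accumulates a higher score, just as a heavily cited page does in PageRank. The event names and link structure are purely hypothetical.

```python
# Hypothetical sketch: PageRank-style power iteration over a graph of
# security events, where an edge A -> B means "event A corroborates B".
# Illustrative only; event names and weights are invented for this example.

def rank_events(links, damping=0.85, iterations=50):
    """links: dict mapping each event to the list of events it corroborates."""
    nodes = set(links) | {t for targets in links.values() for t in targets}
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        # Teleport term: every event keeps a small baseline score.
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for src in nodes:
            targets = links.get(src, [])
            if targets:
                # Split this event's score among the events it corroborates.
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new_rank[t] += share
            else:
                # Dangling event: spread its score evenly across all events.
                for node in nodes:
                    new_rank[node] += damping * rank[src] / n
        rank = new_rank
    return rank

# Example: an IDS alert corroborated by a flow anomaly and a botnet dialog.
events = {
    "ids_alert": ["flow_anomaly", "bot_dialog"],
    "flow_anomaly": ["bot_dialog"],
    "bot_dialog": [],
}
scores = rank_events(events)
```

In this toy graph the most-corroborated event ("bot_dialog") ends up with the highest score, which is the intuition behind using a link-analysis algorithm to push likely true positives to the top of an analyst's queue.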
"At SC10, we expect to show the world that these technologies are now fully matured and able to handle the most demanding of environments," said Livio Ricciulli, founder and chief scientist of MetaFlows. "The HPC community should find our fully SaaS-based security console and predictive global correlation technologies especially interesting, because they afford HPC admins and their MSSPs the levels of secure mobility and efficiency they've always needed but have never seen before."
MetaFlows' NSM will be active throughout SC10, and Chief Scientist Livio Ricciulli will be available November 14th through the 19th to answer any questions you might have about it.
If you are interested in a live demonstration of MetaFlows NSM while at SC10, Livio Ricciulli would be happy to personally demo the system and get your feedback. Simply RSVP with MetaFlows' press contact, Jude Calvillo (email@example.com), to arrange a meeting.
About MetaFlows, Inc.
MetaFlows, Inc. is a California-based corporation currently working to bring the world's first fully SaaS-based IDS management solution to market, a solution so revolutionary in infrastructure and intelligence that it promises to slash the costs and complexity of network security monitoring while actually improving event analysis and remediation response times. MetaFlows is partially funded by the National Science Foundation and SRI International and is led by a team of experienced entrepreneurs with a track record of success in network security ventures. For more information on MetaFlows, visit www.MetaFlows.com.
SC10, sponsored by the IEEE Computer Society and the ACM (Association for Computing Machinery), offers a complete technical education program and exhibition to showcase the many ways high performance computing, networking, storage and analysis lead to advances in scientific discovery, research, education and commerce. This premier international conference includes a globally attended technical program, workshops, tutorials, a world-class exhibit area, demonstrations and opportunities for hands-on learning. For more information on SC10, visit http://sc10.supercomputing.org/.
Source: MetaFlows, Inc.