March 12, 2013
Mountain View, Calif., March 12 — Egnyte, the leader in enterprise file sharing and synchronization, today announced the industry’s first solution supporting integration with third-party cloud storage providers. EgnytePlus, which already facilitates secure access to local storage devices, now integrates with Amazon S3, Google Cloud Storage, Microsoft Azure and NetApp StorageGRID.
The cloud’s promise of competitively priced, utility-like service has gone unrealized when it comes to SaaS solutions. Most recently, however, Amazon, Google and Microsoft slashed cloud storage prices by 25 to 30% in Q4 2012 alone. This "race to the bottom" is a clear sign of a commodity market, one that enterprises must leverage in their quest for optimal data storage solutions.
With Egnyte’s new offering, customers who are contemplating or already operating a heterogeneous storage environment can take advantage of these new cloud economics. By integrating a third-party cloud storage provider, they can quickly and cheaply extend their existing data storage infrastructure to the cloud while still maintaining seamless access for any employee, contractor or client.
"I often hear talk about the cloud as the be-all and end-all of technology, but it is only part of the solution. Businesses need a combination of choice and control – choice of local storage or type of cloud, and control over what files live where. Since not all files are the same, they cannot all be treated in the same way," said Vineet Jain, CEO, Egnyte. "It’s a fact that the overwhelming majority of Fortune 500 companies use multiple on-premises storage vendors and at least one cloud storage provider. Egnyte has the only file sharing solution available to give enterprises choice and control across their diverse storage environments."
More About EgnytePlus for Third-Party Clouds
Egnyte’s file sharing infrastructure solution is optimized for heterogeneous environments and built on a three-tier platform – a Sharing tier, a Replication tier and an Archive tier – each of which lets Egnyte optimize how data is handled. With the addition of third-party cloud storage vendors, Egnyte’s Sharing tier makes the file structure of data stored in third-party clouds visible to Egnyte users without fully replicating the data. User permissions are respected, so users enjoy the freedom of mobility while retaining access to all their necessary files. Egnyte’s cloud is aware of every transaction, creating a single global namespace spanning behind-the-firewall NAS devices and third-party cloud storage providers.
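The "single global namespace" idea above can be sketched in a few lines: listings (metadata only) from several storage backends are merged into one virtual view and filtered by a per-user permission check, while the file contents stay wherever they live. This is an illustrative sketch, not Egnyte's actual API; the class names, backend names and paths are hypothetical.

```python
class StorageBackend:
    """Stub for any storage source (an on-premises NAS or a third-party cloud).
    Holds only path metadata -- no file bodies are copied."""
    def __init__(self, name, paths):
        self.name = name
        self._paths = paths

    def list_paths(self):
        return list(self._paths)


def global_namespace(backends, user_allowed):
    """Merge path listings from all backends into one view, honoring the
    caller's permission check (a path -> bool predicate). The returned dict
    maps each visible path to the backend(s) that hold it."""
    view = {}
    for backend in backends:
        for path in backend.list_paths():
            if user_allowed(path):
                # Record which backend holds the file; the data stays in place.
                view.setdefault(path, []).append(backend.name)
    return view


# Two hypothetical backends: a local NAS and a third-party cloud bucket.
nas = StorageBackend("nas-hq", ["/finance/q4.xlsx", "/eng/spec.doc"])
s3 = StorageBackend("amazon-s3", ["/eng/spec.doc", "/marketing/logo.png"])

# A user permitted to see only /eng gets a filtered, merged listing in which
# the shared file appears once, with both of its locations noted.
eng_only = global_namespace([nas, s3], lambda p: p.startswith("/eng"))
```

The point of the sketch is that the unified view is built from listings alone; replication (if any) is a separate, lower-tier decision.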
Over 1 billion files are shared daily by businesses using the Egnyte Hybrid Cloud file server. Egnyte’s unique technology provides the speed and security of local storage with the accessibility of the cloud. Users can easily store, share, access and back up files, while IT has the centralized administration and control to enforce business policies. Egnyte, founded in 2007, is based in Mountain View, California, and is a privately held company backed by venture capital firms Google Ventures, Kleiner Perkins Caufield & Byers, Floodgate Fund, and Polaris Venture Partners.