Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

June 9, 2008

Microsoft Looks at In-Memory Caching with ‘Velocity’ Project

Derrick Harris

Although it likely will be some time before we see a product announcement, and although its existence was just made public at last week's TechEd conference (and subsequently buried in this announcement highlighting Bill Gates' keynote), Microsoft's "Velocity" project is a big deal.

There is no denying the world’s largest and still most dominant software vendor has taken some hits courtesy of the software-as-a-service and cloud computing trends. Although it has responded somewhat admirably with its various Microsoft Online Services, Office Live and Live Mesh products, those initiatives only viewed the on-demand world from the end-user’s perspective. As I have noted before, software (and even hardware) vendors must not forget that some users cannot, or simply will not, access their applications as Web services: (1) companies/organizations that view their applications as too mission-critical and their data as too sensitive to send across the public Internet; and (2) the companies that actually create and manage the applications that make Web 2.0 what it is.

Assuming such applications are written in .NET, Velocity should get Microsoft markedly closer to IBM and Oracle, and even to smaller providers like GigaSpaces and ScaleOut Software — all of whom support .NET (and all of whom we have featured within the past year) — in terms of being able to cache application data in-memory. This, of course, is profoundly important to anyone running a transaction-heavy Web application, or, really, any application where latency times have a direct impact on user experience or performance (I'm looking at you, financial services). The ability to have data availability scale right along with performance is no insignificant capability, either. Of course, it's still too early to make any proclamations or get any user accounts about Velocity, but considering the sea of .NET applications already out there (and the countless more to come), Microsoft looks to finally be serious about being part of the high-performance-application world. It will be interesting to see how Velocity interacts with Microsoft's Hyper-V virtualization technology and Windows HPC Server 2008.
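Velocity's API wasn't public at the time of writing, so as a generic illustration of what a distributed in-memory cache buys a transaction-heavy application, here is a minimal cache-aside sketch in Python. Everything here is a hypothetical stand-in, not Velocity's actual interface: a plain dict plays the role of the distributed cache, and `fetch_from_db` and the TTL value are assumptions for the example.

```python
import time

CACHE = {}          # key -> (value, expiry_timestamp); stands in for a distributed cache
TTL_SECONDS = 30    # how long a cached entry stays fresh (illustrative)

def fetch_from_db(key):
    """Stand-in for an expensive backend lookup (database, web service)."""
    return f"value-for-{key}"

def get(key):
    """Return the value for key, hitting the backend only on a cache miss."""
    entry = CACHE.get(key)
    if entry is not None:
        value, expires = entry
        if time.time() < expires:
            return value          # cache hit: no backend round trip
    value = fetch_from_db(key)    # cache miss (or stale): fetch and repopulate
    CACHE[key] = (value, time.time() + TTL_SECONDS)
    return value
```

The point of the pattern is the second call: once `get("user:42")` has run, subsequent lookups within the TTL never touch the backend, which is exactly the latency win the caching products above are selling.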

Whatever emerges from Velocity also should be good news to Microsoft's technology partners — in particular Digipede, which has been delivering distributed computing to .NET apps and now might get the add-on technology it needs to compete with the big boys. Digipede has received no shortage of praise from customers and commentators alike for its relatively inexpensive and very user-friendly solution, but one of the drawbacks has been the limited range of job types the Digipede Network can handle, namely CPU-intensive jobs benefiting from parallel processing. If Microsoft and Digipede can make Velocity and the Digipede Network function as a unit and keep the price down, Digipede could find itself selling to a whole new, real-time-data-loving audience. That this integration will occur is pure speculation on my part, but it seems to make sense on the surface.

Getting back to cloud computing from the user point of view (and switching from memory concerns to CPU concerns), I want to draw your attention to Amazon's recent announcement on its Amazon Web Services site that EC2 users can now take advantage of high-CPU instances. Whereas traditional EC2 instances place a premium on memory (an extra-large instance, which costs $0.80 per instance-hour, provides 15GB RAM and 8 EC2 Compute Units), a high-CPU extra-large instance, for the same price, offers up 7GB of RAM and 20 EC2 Compute Units. Amazon claims this was a popular feature request, and says the new instances are ideally suited for rendering, search indexing or computational analysis applications. There are few, if any, on-demand utility computing services that have the cachet of EC2, so this could be a real test for how popular the on-demand delivery model can be among the more traditional HPC crowd. Something tells me the near-bare-metal access will be much appreciated for these types of applications.
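The price/performance trade-off here is easy to quantify. A quick Python sketch comparing cost per EC2 Compute Unit, using only the figures quoted above (the dict layout and instance labels are mine):

```python
# Both instance types cost $0.80 per instance-hour, per Amazon's announcement.
PRICE_PER_HOUR = 0.80

instances = {
    "extra-large":          {"ram_gb": 15, "ecu": 8},
    "high-cpu extra-large": {"ram_gb": 7,  "ecu": 20},
}

for name, spec in instances.items():
    # Cost per EC2 Compute Unit per hour: flat hourly price divided by ECUs
    cost_per_ecu = PRICE_PER_HOUR / spec["ecu"]
    print(f"{name}: ${cost_per_ecu:.3f}/ECU-hour, {spec['ram_gb']}GB RAM")
```

For the same $0.80 an hour, the high-CPU instance works out to $0.04 per ECU-hour versus $0.10 on the memory-heavy instance — a 2.5x better deal per unit of compute, provided your workload can live with less than half the RAM.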

The current poster boy for running CPU-intensive jobs online is Sun Microsystems with its Network.com service. And while the service hasn't exactly been overwhelmed with users thus far, it appears to be making a name for itself in the rendering space. Announced earlier in the year as "Peach," the first 3-D movie produced using Network.com resources finally has been released under its official title of "Big Buck Bunny." Sun still isn't ready to declare rendering — specifically with the Blender application offered in Network.com's application catalog — the service's killer app, but considering the potentially prohibitive cost of installing an in-house grid, smaller animation studios owe it to themselves to give Sun a look. I watched a few minutes of the movie — part of an open source project designed to "stimulate development of open source 3-D software" and to prove that running such software in an on-demand environment can reduce costs — and the quality impressed my untrained eye. You can read all about the production process and the partnership here, including some of the problems encountered along the way, and why Sun will have to up its resource levels in order to tackle the Blender Foundation's next project.

Finally, I need to at least point out this week's other feature article, which gives an in-depth look at HP's new two-in-one server blade. Part of HP's new scalable computing initiative, the first-of-its-kind piece of hardware is designed to increase density and reduce power bills in Web-scale datacenters. As with IBM and Dell before it, HP is targeting the Web 2.0 and cloud computing markets with this solution, but its early users are reaping the rewards in more traditional manners.

Other items of interest this week might include: "Credit Suisse Launches Virtualization Management Company"; "Travel Agency Sees 100 Percent Availability w/ Gridscale"; "rPath Enables Cloud Computing for DoE, CERN"; "Voltaire Lowers Latency w/ Messaging Accelerator"; "Panorama Announces Analytics as a Service"; and "Liquid Computing, NetApp Combine Computing, Storage, Networking."


Comments about GRIDtoday are welcomed and encouraged. Write to me, Derrick Harris, at