NASA, Rackspace Open-Sourcing the Cloud

By Nicole Hemsoth

July 19, 2010

This morning NASA and Rackspace announced their partnership on a project called OpenStack, which is based on donated code from NASA’s Nebula cloud platform and Rackspace’s own Cloud Files and Cloud Servers public cloud offerings. Although NASA’s contribution to the project won’t be felt until later in the year, its provisioning engine coupled with Rackspace’s offerings will provide a highly flexible alternative to other cloud options, at least once the project catches on and hits critical mass. For now, however, OpenStack sits on the growing watchlist of potentially paradigm-shifting technologies, and, as one might imagine, speculation is already swirling.

Outside of its capabilities, the story for many in the community is less about jumping on board for immediate production use and more about what the project means for the culture of the cloud, namely interoperability and the proprietary-versus-open-source debate. The official arrival of OpenStack might change the way many think about vendor lock-in fears and cloud standards, while providing some tangible benefits for Rackspace (not to mention cloud adoption overall) in the process, if only in the way of goodwill.

As it stands now, when it comes to cloud APIs, Amazon’s is quickly on its way to becoming the de facto standard, if it isn’t already. Whether the OpenStack news will gather enough momentum to shatter that broad opinion remains to be seen, but in the meantime there is a lot of work to be done. This is not production-ready code yet, and it still requires massive support. With enough of that support, and with the help of the 25-and-counting corporate backers aligned with the project’s mission to open the cloud, it may well get there.

Those “corporate sponsors” of OpenStack who have vowed their support appeared on a roster following a workshop held last week to help the project build an ecosystem of open cloud environments. Among the firms that have publicly announced their cooperation are RightScale, Citrix, Intel, AMD, Dell, Opscode, and Cloud.com, but the details of any one company’s involvement have been shadowy at best, which does seem a bit odd.

OpenStack will feature several cloud infrastructure components, including a fully distributed object store based on Rackspace’s Cloud Files, which is available now. A second phase of the release, a scalable compute-provisioning engine based on technology NASA pioneered for its Nebula cloud, will be integrated later this year; once completed, it too will be available under Apache licensing.
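To give a sense of what “programming to one open platform” looks like in practice, the object store descended from Cloud Files exposes a simple REST addressing scheme in which every object lives at a path of the form `/v1/<account>/<container>/<object>`, with the HTTP verb (PUT, GET, DELETE) selecting the operation. The sketch below assumes that URL pattern; the endpoint and account names are hypothetical placeholders, not real Rackspace values.

```python
# Sketch of the Swift-style REST addressing scheme used by the OpenStack
# object store. The endpoint and account below are hypothetical examples.

def object_url(endpoint, account, container, obj):
    """Build the canonical URL for an object in a Swift-style store."""
    return "%s/v1/%s/%s/%s" % (endpoint.rstrip("/"), account, container, obj)

# Uploading would be an HTTP PUT to this URL with an auth-token header;
# downloading the same object is a GET on the identical URL.
url = object_url("https://storage.example.com", "AUTH_demo", "backups", "db.tar.gz")
print(url)  # https://storage.example.com/v1/AUTH_demo/backups/db.tar.gz
```

Because the addressing scheme is open and uniform, an application written against one provider’s deployment can, in principle, point at another provider’s endpoint without structural changes, which is precisely the portability argument made below.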

The NASA connection goes a long way toward establishing the credibility of this open source push from Rackspace. So does the fact that Rackspace’s contribution to OpenStack is mature and time-tested, unlike some other open source projects that lack a proven track record to speak to their success, even if they are being used in production without issues.

For now, the project is not going to change the lives of those in the small to mid-range market by any means. This news is geared toward those who can actually make the most of OpenStack as it stands today: large-scale enterprises and institutions. According to Fabio Torlini, Rackspace EMEA marketing director, in an interview, this is “not a code that many small and medium businesses are likely to run until they are more mature. Instead, it’s aimed at providers, institutions, and enterprises with highly technical operations teams that need to turn physical hardware into large-scale cloud deployments.” From a development standpoint, this also means users will be able to apply their domain experience to build applications on an open platform useful in their niche, and to migrate those applications as needed rather than facing lock-in once they settle on a particular provider. Application portability has been a noted concern among many in HPC, and while this might not solve more general data-movement issues, it is a step in the right direction.

The Interoperability Angle

The big news here outside of the open sourcing of its code more generally is the message it sends about interoperability and standards in the cloud. One of the greatest fears, especially for enterprise and scientific users, is that they face major hurdles if they ever hope to leave the cloud they’ve landed upon. Having an open source cloud means that concerns about moving data from one cloud provider to another might be negated, thus alleviating the often-cited fear of “cloud lock-in” which refers to the roach motel business model — where users can check in anytime they’d like but can never leave.

Torlini stated, “The open source model has been proven to promote the standards and interoperability critical to the success of our industry. The explosive growth of the internet can be attributed to open, universal standards like HTTP and HTML. The early cloud offerings have bucked this trend and are largely proprietary. No one benefits from a fractured landscape of closed, incompatible clouds where migration is difficult and true transparency is impossible…it’s critically important for the cloud to be open and many people in the industry share concern about the proprietary nature of the leading cloud platforms.”

Open Clouds for Developers

One of the other items of interest is what this means for application developers and the niche industries they serve. Having one stable, open platform to program to creates a far more hospitable environment for those building the next application for their segment. If OpenStack catches on as many predict, especially once NASA’s contribution is fully integrated, it will allow a richness in application development that could only be possible with an open programming paradigm. That is great news for developers, and very good news for the companies that depend on their innovations to remain competitive. In some senses, OpenStack could be the great equalizer of the cloud services industry, which means, of course, there will be victims. The extent of its impact on the established players is difficult to analyze at this point, but as time wears on and the wide range of possible uses for this software becomes apparent, we may see that it alters the landscape for proprietary cloud software significantly.

In an interview with CNET, Mark Collier, Rackspace VP of Business Development, noted that “part of the reason this project is open source is that enterprise developers have more specific domain knowledge than service providers might and that open source provides a way for interested users to collaborate to create a better product.” Others commented on the relevance of this story for developers, including Rackspace’s Torlini, who noted that, “Software developers will also be able to program to one stable platform. OpenStack will become the cloud platform of choice in the same way that Android has rapidly become the platform of choice for mobile providers.”

A Changing Cloud Landscape?

If the cloud starts moving toward the open source approach, it could mean a pole shift for the entire industry. As analyst Steve Hilton noted, large cloud companies such as HP, Amazon and Oracle were not on the list of participating companies, and this could become an issue: “Even if someone in the industry builds an open source cloud, there are lots of forces — enterprise control, vendor lock-in, channel partner business models — keeping it from being adopted.”

RightScale’s CEO Thorsten von Eicken blogged about the company’s involvement with OpenStack in non-technical terms, stating, “having many fragmented cloud efforts doesn’t really help build a compelling alternative to Amazon, who keeps adding incredible new features at a blazing pace. And the industry needs an alternative to Amazon, not because of some problem with AWS, but because in the long run cloud computing cannot fulfill its promise of revolutionizing the way computing is consumed if there aren’t a multitude of vendors with offerings targeting different use cases, different needs, different budgets, different customer segments, etc.”

From a distance, it’s hard to find any drawbacks to the Rackspace announcement from an eventual user’s perspective, as it has very broad appeal, especially to the growing number of voices concerned about interoperability. For one thing, it’s open source, and not just open source but well-tested open source rather than a public trial-and-error process: Rackspace has been running this same software without complaint for a number of big-name clients, including several in research and academia, for some time. For another, the uses for such a capability are nearly limitless, creating a new playing field for the entire cloud industry, from the massive needs of the HPC community down to, eventually, small startups.

Whether or not OpenStack becomes the great cloud equalizer or languishes over the course of the next year in development and refinement remains to be seen, but for the goals of interoperability and a truly open cloud, this is certainly compelling news.
