SC10 Disruptive Technology Preview: The First Cloud Portal to “R” and Beyond

By Nicole Hemsoth

October 26, 2010

At each annual Supercomputing Conference a handful of innovations are selected as the year’s “disruptive technologies” that are most likely to revolutionize high-performance computing. These are described as “drastic innovations in current practices…that have the potential to completely transform” the landscape. 

At this year’s event in New Orleans, the focus will be on “new computing architectures and interfaces that will significantly impact the high-performance computing field throughout the next five to 15 years,” a focus that is reflected in the list of disruptive exhibitors who were selected by an SC committee. 

Another “qualification” of those selected innovations is that they cannot have already emerged into the landscape in any meaningful way—that they sit on the bleeding edge waiting for impetus to burst forth and cause a paradigm shift.

At the edge of this potential sea change in HPC, and included on this year’s SC10 list of innovations, is a one-man show run by Karim Chine and his newly minted company, Cloud Era, Ltd.

Chine’s opportunity to showcase his “Google Docs-like portal for scientific computing in the cloud” could bring significant interest to his three-year effort, which he bootstrapped after he was unable to secure funding for research and development, and could help this self-described “social entrepreneur” make what he calls a real, universal impact on the broad field of large-scale data analysis.

Chine’s goal when he began the project after leaving academia was to bring the R language to the cloud and deliver it seamlessly to users who can share infrastructure and collaborate in real time with a wide range of documents and computational tools. Or at least that is the Reader’s Digest version; the actual technology and processes behind the experience go far beyond these elements in complexity and in what they make possible.

From the outset, Chine saw the inherent value of R as a ubiquitous tool, but he also recognized a number of challenges embedded in the language, with memory and compute capacity quickly stretched to their limits. On the other end of the spectrum, he also saw how he could carry over lessons from social networks. Chine notes that part of what makes his Elastic-R project innovative, even disruptive, is that users can move beyond sharing static information as they would on a social networking platform and instead have a scientific network where real-time information sharing is at the core of its communities.

The R Language Coming to a Browser Near You

It would be far too simple to suggest that what makes the platform unique or disruptive is merely its capacity for real-time resource and information sharing. At the core of this innovation is the enhanced ability for researchers to use R, Scilab, and other tools in a new way: on the “infinite” resources provided by the cloud.

Many will agree that the R language is the lingua franca of data analysis—it’s the standard for nearly all statistics students in every major university and has a user base that some estimate is well over one million. In Chine’s view, the beauty of the R language, which is an open source implementation of S, lies “not just in statistics, not just in open source, it’s become the environment where people share scientific artifacts—where people contribute and access powerful tools for working with data.”

Although Chine discussed at length the benefits of the R language for scientists and researchers, he noted that the language has some significant limitations, particularly in its software architecture and its limited ability to optimize memory usage. However, the memory and architecture problems can be addressed by delivering R via cloud-based resources like EC2, an environment where a user is no longer constrained by compute or memory and where inexpensive machine instances with 70 GB of RAM can be called into action in a few moments.
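
Neither the article nor Chine details how Elastic-R provisions these machines, but as a rough illustration of what calling a 70 GB instance into action looks like programmatically, here is a minimal sketch using Amazon’s EC2 API through the Python boto library. The AMI ID and key pair name are hypothetical placeholders, and m2.4xlarge stands in as the roughly-70-GB instance type of the era.

    # A minimal sketch, not Elastic-R's actual code: requesting a
    # high-memory EC2 instance through the Python boto library.
    import boto

    # Credentials come from the environment or boto's config file.
    conn = boto.connect_ec2()

    # The AMI ID and key pair below are hypothetical placeholders;
    # m2.4xlarge is the ~70 GB RAM instance type of the era.
    reservation = conn.run_instances(
        'ami-12345678',
        instance_type='m2.4xlarge',
        key_name='my-keypair',
    )

    instance = reservation.instances[0]
    instance.update()  # refresh the instance's state from the EC2 API
    print(instance.state, instance.public_dns_name)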

The idea of a “few moments” to get an instance up and running might strike some newer EC2 users as a little far-fetched, which points to another issue Elastic-R might be able to solve. One of Chine’s goals was not only to make R available via a web browser on a device with limited compute capacity, an iPad for instance, but to deliver the resource in a way that is intuitive and strips away the potential complexity of accessing remote infrastructure.

As Chine describes it, “Elastic-R enables scientists, educators and students to use cloud resources seamlessly, work with R engines and use their full capabilities from within any standard web browser. For example, they can collaborate in real time, create, share and reuse machines, sessions, data, functions, spreadsheets, dashboards, etc.”

“Elastic-R is also an applications platform that allows anyone to assemble statistical methods and data with interactive user interfaces for the end user,” he continues. “These interfaces and dashboards are created visually and are automatically published and delivered as simple web applications.”

For Chine, the revolutionary or disruptive nature of Elastic-R lies in its user-friendliness, something few people would say about the R language on its own. He states that offering an easy-to-use platform on top of R in any browser allows people to access infrastructure without being computer savvy or having any specific training. In essence, within three minutes you can have simple access to machines on EC2 that let you do anything you want with large-scale data.

Even more disruptive, however, is the fact that users can hook in other scientific computing tools, such as Scilab or MATLAB, making Elastic-R a universal platform that is open to change and to additional tools that enhance research. Users can then eliminate the problems of data scattered across disparate formats, which complicates sharing, by porting results directly into standard Microsoft Office documents that can be shared and edited in real time via the web interface.

Taking R Beyond the Public Cloud

At the moment the resource can only be deployed on Amazon EC2, but this is simply a matter of how far Chine has taken it so far; in theory, it can run on any resource. For instance, when he first rolled out the prototype version of Elastic-R, he did so on the U.K.’s National Grid Service using a standard cluster, and the same would have been possible on any other resource he might have selected.

The point is that what Chine has created is agnostic to hardware and operating system: users connect to computational engines via their browsers, which lets them work with large-scale data that never moves but can still be shared with others for real-time collaboration.

As Chine stated, “What’s wonderful about Amazon is that they already deliver the most significant public cloud of the moment, but also that they’ve blurred the frontier between normal computing and HPC…For the end user or interaction design perspective there’s no borderline between general computing and high-performance computing now.”

Elastic-R offers a range of capabilities almost too numerous to mention in a relatively short article. In fact, this seems to be one of the reasons it is such a disruptive technology; it is multi-layered in its potential usefulness. Scientists and researchers can open mainstream computing environments beyond R (Scilab, SciPy, Sage, etc.), issue commands to the remote R engine, install and deploy new packages, and easily run computationally intensive algorithms managed through the simple interface, then share all of it, including the computational resources themselves.
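
Elastic-R’s own programmatic interface is not documented in the article, so the following sketch illustrates the same pattern with a different, real mechanism: the open-source Rserve protocol and its pyRserve Python client. The hostname is a hypothetical placeholder.

    # A hedged sketch using Rserve/pyRserve, not Elastic-R's actual API:
    # issuing commands to a remote R engine from a thin local client.
    import pyRserve

    # Connect to an R engine running Rserve on a remote machine
    # (the hostname is a placeholder).
    conn = pyRserve.connect(host='ec2-host.example.com', port=6311)

    # Install and load a package on the remote engine, much as a user
    # would through Elastic-R's interface.
    conn.voidEval('install.packages("randomForest", repos="https://cran.r-project.org")')
    conn.voidEval('library(randomForest)')

    # Run a computation remotely and pull back only the small result;
    # the large data itself never leaves the server.
    result = conn.eval('mean(rnorm(1e7))')
    print(result)

    conn.close()

The design point the sketch captures is the one Chine emphasizes throughout: the heavy data and computation stay on the remote engine, and only short commands and compact results cross the wire.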

A slide from Chine’s presentation deck (the full pptx file provides a more in-depth overview of the layers of the Elastic-R portal and what it provides) shows the onion-like way users can visualize their access to resources and tools.

During an interview with Karim Chine, I was granted access to the interface to watch how collaboration happens and how resources are secured. Without much experience at all, it was possible to understand intuitively exactly what was needed to get my job running, to identify where the results were and who I could share them with, and to see that at the exact moment I updated a spreadsheet, my partner on the other side of the ocean could see my changes. Real time. There was no delay. The moment he replaced a “5” with a “6” on his end, I saw it on my own browser screen.

This is big news for the future of scientific collaboration and computation using remote resources. 

A Business Model Still in the Making

Chine’s goals are multi-layered and go far beyond making R more accessible to greater numbers of researchers via the cloud. He hopes to create a “Facebook” for scientists and statisticians: a place where they can share and collaborate with big data in real time using a simple interface, on top of which they can build applications and seamlessly add or shed layers of computational tools and resources.

As a social entrepreneur, Chine notes that this interface, as it develops, means that researchers in developing countries without access to high-performance computing resources can now easily create machine instances for small sums, and even if those prices are too high, they can share infrastructure with collaborating participants.

In essence, this disruptive innovation involves not only an economy of information sharing but also an economic angle that lets researchers extend their infrastructure to colleagues across the world easily and in only a few moments.

As a business model, however, there are some issues that Chine admits he is still working to resolve. On the one hand, he sees the possibility of partnering with the makers of scientific tools, including The MathWorks, in a revenue-sharing arrangement once those tools are integrated. He also sees value for supercomputing centers that might want to provide a simpler and more streamlined way to access and use high-performance computing infrastructure.

For now, however, he admits that he is simply waiting to see how useful the platform proves as he extends his user base, which currently stands at only 140 members, all of whom he knows personally. He will announce the technology as publicly available just before SC10.

While the cloud can open the doors to enhanced collaboration and resource sharing and provide the tools researchers need, there remains a need for software that builds a sturdy bridge between scientific computing tools and the cloud. That is where Elastic-R fits into the picture.

Given the open, collaborative nature of the project, driven by its social-entrepreneur founder and creator, it will be thrilling indeed to watch how the community receives, uses, and then builds on this disruptive innovation.
