HPC Startup Backs into Cloud

By Nicole Hemsoth

August 29, 2011

Some could argue that it really does take a rocket scientist to address the limitations of high performance computing systems.

According to one rocket scientist we talked to recently, the problem with HPC clusters for aerospace engineers has little to do with the applications or the real science behind the rockets. Rather, he says the thorny issues are rooted in the limitations, complexity, and general care-and-feeding distractions that force scientists and other non-IT experts to work double duty as HPC system managers.

Mike Colonno, a former aerospace engineer at SpaceX and current CTO at Black Sky Computing, says that scientists, rendering artists, bioinformatics professionals, and those in a wide range of other HPC verticals are spending too much time wrestling with high performance computing complexities at the expense of their projects.

Colonno pointed to his experiences at SpaceX, claiming that in his daily routine of refining aerodynamic rocket designs, he was forced to spend 80 percent of his time handling IT-related issues rather than focusing on his application. His conversations with HPC users in other fields confirmed the suspicion that this wasn’t an aerodynamics industry problem—it was a problem for anyone contending with cluster computing, in any industry.

In a search for a solution to the HPC usability issue, Black Sky was born, at first with the mission to deliver ready-to-roll high performance cloud environments to suit the needs of the many customers he claimed were seeking hybrid HPC solutions. In short, Colonno says that now—and certainly over the next few years—the hybrid high performance cloud is in high demand. Customers, he says, want to maintain in-house resources that can be easily and seamlessly burst out into a cloud environment to meet time-sensitive demands without incurring vast IT headaches.

The founders of Black Sky—Colonno, Scott Alexander, a former senior software engineer at PayPal, and a third cofounder who had also been with SpaceX—found that the existing cloud computing solutions available from the likes of Amazon and others were not suitable for HPC, and that true cloud HPC couldn’t work without servers, storage, and networks all designed around those specific needs. The infrastructure supporting these environments could not, at least according to Colonno and Alexander, support the heavy I/O demands or provide the performance and price match needed for many HPC applications—which left a serious cloud company little choice but to build its own.

And this is where things get rather interesting—at two different ends of the HPC spectrum. First, the company set about building a cloud offering composed of purpose-built high performance computing gear from top to bottom: hyper-efficient servers, 40 gigabit Ethernet, and a robust storage array. While there’s nothing necessarily remarkable in that alone, it is worth noting that the team decided that if they didn’t build it themselves—from servers to storage to software—they couldn’t produce a truly HPC-ready offering.

For Colonno and his team, this led to a backwards approach to getting into the cloud business. They started off hoping to find a niche by delivering highly focused, refined HPC cloud solutions that carved anything unnecessary out of the server and storage flank—and suddenly found themselves in the hardware business. The strange thing is that, for now at least, the hardware push meant to support the main cloud objectives has been the source of the company’s profitability, while the cloud, called SkyNet (no, not that Skynet), continues to be gussied up in beta in time for real customers sometime late this year or early next.

At this build-versus-buy juncture, one might think going to a vendor like Dell, with its DCS service, would be the best alternative, since tailoring designs for HPC users is something the company has specialized in. Colonno says their experiences with the DCS team were excellent—they came close to having Dell build the hardware for their cloud. However, the sticky issue was that while Dell provides outstanding support and assistance during the design process, solutions developed with Dell’s assistance would ultimately be added to the Dell server portfolio.

Seeing the uniqueness of their approach to pure HPC-driven hardware, Colonno and Alexander said they made the decision to shed the third party and retain full control over the project. In other words, they simply designed and built their own hardware to support the efficiency, performance, and manageability layers required specifically for HPC users. Not only did a cloud company thus become a hardware company—that hardware company went on to deliver HPC-tuned systems that can double as physical datacenter resources or come ready-made to burst customers into the cloud on the server, storage, and software fronts.

Colonno said the incentive to deliver highly customized HPC solutions stretched across multiple areas for users. First, he says that companies that cater to the middle market (consumer IT) yet have an HPC division often have the expertise, but the solutions they offer are not tailored, leaving in features that don’t matter for high performance computing. On the storage side, for instance, ripping out standard offerings that customers don’t need—like hourly snapshots of read-only junk—refines the offering. By emphasizing I/O in the design, achieving full integration with one vendor for the server, storage, and software stack, and stressing density, the Black Sky founders say they found something no one else was offering.

Alexander said that one of the weaknesses of off-the-shelf systems (beyond the fact that too many users don’t understand the plethora of problems that can arise after a system has been powered on) is a lack of recognition of basic HPC needs. From this, their portfolio of hardware offerings was born, including the Apollo storage line, which sheds the superfluous and addresses the “hub and spoke” problem, in which dozens or thousands of computers read and write to a central source at full throttle only to be shot down at the bandwidth level. According to Alexander, volume and throughput are central concerns that are often not addressed by non-tailored solutions.
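To see why that hub-and-spoke pattern hits a wall, a back-of-the-envelope calculation helps. The sketch below (with illustrative numbers of our own choosing, not Black Sky specifications) simply divides a single storage head’s link capacity across a farm’s worth of compute nodes:

```python
# Back-of-the-envelope look at the "hub and spoke" bottleneck: many
# compute nodes reading/writing to one central storage head at once.
# All figures here are illustrative assumptions, not Black Sky specs.

def per_node_bandwidth_mb_s(storage_link_gbit: float, num_nodes: int) -> float:
    """Bandwidth each node sees when a single storage head's network link
    is shared evenly across all nodes (Gbit/s in, MB/s per node out)."""
    link_mb_s = storage_link_gbit * 1000 / 8  # Gbit/s -> MB/s
    return link_mb_s / num_nodes

# A 10 Gbit/s storage head feeding a 100-node farm:
print(per_node_bandwidth_mb_s(10, 100))   # 12.5 MB/s per node
# The same farm against a 40 Gbit/s head:
print(per_node_bandwidth_mb_s(40, 100))   # 50.0 MB/s per node
```

At 12.5 MB/s per node, a farm of I/O-hungry nodes starves, and even quadrupling the central link only gets each node to 50 MB/s—which is why a tailored design attacks aggregate throughput rather than the pipe to any one box.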

Colonno pointed to the same paring-down process at the server level with Black Sky’s Hyperion line. He claims it took several incarnations and painful lessons to learn that by focusing on the bare-bones essentials that balance efficiency with raw power, density could be increased at a performance and price point that customers could very easily live with.

The team also addressed the middleware “glue” that makes management seamless, whether bursting or sticking to in-house resources. Using a custom blend of open source and in-house software originally created to manage SkyNet, the team claims that while they might not offer all the bells and whistles of a commercial solution like Platform LSF or Moab, they hand over everything needed to make system management a breeze—and to make bursting into the cloud so easy that end users won’t even be aware it has happened. Colonno claims that one of their target markets, the rendering and visual effects industry, responds well to this, given its need for cloud “burstability,” high performance, and ease of use. The end users here are artists who are application gurus but lack the IT sophistication needed to tame the HPC beasts of rendering.
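As a rough illustration of the decision such middleware makes—a minimal sketch under assumed names and thresholds, not Black Sky’s actual (non-public) software—a scheduler can simply route a job to the cloud whenever the local cluster cannot absorb it:

```python
# Minimal sketch of a cloud-bursting decision in the spirit of the
# middleware described above. The names, fields, and routing rule are
# hypothetical; SkyNet's real scheduler is proprietary.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    cores: int

def submit(job: Job, free_local_cores: int) -> str:
    """Run on the in-house cluster when capacity allows; otherwise
    burst transparently to the cloud behind the same interface."""
    if job.cores <= free_local_cores:
        return f"{job.name}: dispatched to the local cluster"
    return f"{job.name}: burst out to the cloud"

# A render farm with 64 free cores keeps the first frame in-house
# and bursts the oversized second job out to the cloud.
print(submit(Job("frame_0001", 32), free_local_cores=64))
print(submit(Job("frame_0002", 128), free_local_cores=32))
```

The point the team stresses is that the routing decision hides behind the same submission interface, so the artist who submitted the second frame never needs to know it ran off-site.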

Colonno’s humble statement that they’ve stumbled upon a “sweet spot” in computing is food for thought. By starting out with the plan to develop an HPC cloud of their own, their need for hyper-efficiency, density, cooling, and storage was driven by their own economic interests. They wanted to run the cloud as cheaply as possible without sacrificing performance. The incentive to build the most cost-effective solution was therefore paramount; the efficiency of the design mattered directly to their own bottom line.

The cloud itself could be attractive to those desiring a hybrid solution, says Colonno, but the hardware caters to the same requirements: 40 gigabit Ethernet, the option of QDR InfiniBand or 10 gigabit Ethernet, and efficiency sound enough that Black Sky is willing to bet its own dollars on it.

The fact that this “sweet spot” turned into a hardware portfolio that drew more interest than the cloud it was built for was a happy accident—but one the team can certainly accept. While SkyNet is enjoying a productive beta, the company has sold over a dozen systems and already has a few case studies in rendering and aerospace under its belt. When SkyNet launches later in the year, the team can put their design to the ultimate test—running a profitable, efficient, performance-geared cloud that brings new business flocking. Or so they hope.

Company chief Scott Alexander says that even once the hardware challenges are met, some “man-made” challenges remain. Software licensing barriers—with many vendors afraid to climb on board with anything that breaks the profitable per-node pricing model—along with general bad press around security and performance are preventing clouds from becoming more prominent in HPC verticals. However, he claims that once the software companies get on board en masse and clouds continue to build a solid reputation, the sky is the limit, and the “sweet spot” they stumbled upon could bear delicious fruit indeed.
