From Rendering LOTR to Animating HPC Clouds

By Nicole Hemsoth

September 16, 2013

Much has been written about the incredible animation work that went on behind the scenes of the Lord of the Rings series, but without the rendering horsepower of high performance systems, none of it would have been possible. It took an entire “supercomputing center” and a unique cloud build-out for the team to finish – a process that led to the creation of GreenButton, one of the few companies that identify as HPC cloud oriented.

In this audio interview, HPCwire editor Nicole Hemsoth talked with Scott Houston, the former CIO of WETA Digital, which provided the IT infrastructure that powered the stunning visual effects for the films. Scott is currently the CEO of GreenButton, which spun out of that work.

HPCwire: Scott, we talked back in 2011 at the Supercomputing conference about many of your experiences on the Lord of the Rings films. Can you give us a sense of the background – what were some of the decision-making processes when you were looking at different IT solutions?

Houston: It’s really been an interesting journey for me. In fact, back in 2003 the cloud really wasn’t available, so we literally had to go out and buy a thousand processors for one particular shot – the Battle of the Pelennor Fields. I still get shivers up my spine every time I see that shot. It’s actually midway through The Return of the King, when the 14,000 riders of the Rohirrim (the horse warriors) clash on the battlefield with 83,000 orcs.

And the reality is that even though we had four data centers and two and a half thousand clustered processors, we literally had to go out and buy another thousand processors and build a brand new data center just to get that one shot done, right at the end of the production of the movie. It was at the conclusion of the movie, in late 2003 and early 2004, that I realized there had to be a better way. That infrastructure was sitting idle at the end of the movie – the next production for WETA was King Kong, so we didn’t really need those extra thousand processors.

So I formed a consortium called the New Zealand Supercomputer Center, and we rented out time on it. We did some biotech work, we did some seismic processing, we helped render the movie Happy Feet. But the problem was that a number of our users just wanted to run small jobs, and that’s really where the genesis of GreenButton lies. I saw an opportunity in the market to create software to automate the process of using this capacity on demand, well before the cloud came around.

And a fantastic story that closes this loop is that today we’re working with a company in Mexico that is rendering a full-length feature animation entirely on the cloud. So in the future, CIOs and CTOs making movies and running large computationally intensive projects may not necessarily have to go out and buy that capacity.

HPCwire: What is the status of the New Zealand Supercomputer Center now? Are you running a balance between research/scientific workloads and entertainment/media workloads? Where does that stand?

Houston: That environment was shut down in 2009 and we’ve been cloud-only since then. So today we have jobs running on Windows Azure, Amazon, and vCloud environments, and earlier this year we announced support for OpenStack.

HPCwire: Since you’ve been heavy users of both Amazon and Azure from the beginning, what do you think some of the differences are – the advantages and disadvantages between those platforms? Where does Amazon have the advantage over Azure, and Azure over Amazon?

Houston: Good question. We have customers in, at last count, 77 different countries, so in many cases it just comes down to geography – where is the nearest data center, and how fat a pipe can I have – what is the bandwidth to get to that datacenter. So a number of those decisions are driven by where the nearest datacenter is.

Then there are the technical decisions – the processing requirements, the support requirements. We do have workloads that run on Amazon; primarily one of them is a seismic-processing-on-demand service we call Cloud Claritas, and that runs on Amazon mainly because Amazon supports 10 Gigabit Ethernet. The application is MPI-based, so the technical requirements drove that decision.

Sometimes it comes down to memory. There are new large-memory instances on Azure now, so some workloads that were running on Amazon have been ported over to Azure. There’s a company that has just been set up called ProfitBricks, and they have InfiniBand support – today not many other cloud providers have InfiniBand – so we’re starting to see some workloads running there.

To answer your question, often it will come down to the geographic location of the customer, and then also the technical performance, or the requirements of the workload.

HPCwire: If you’re talking about seismic processing for instance, not only is that computationally intensive, it’s data intensive. If you’re talking about using public cloud resources, I would imagine that the data movement costs would be pretty significant – how do you balance that out?

Houston: Good question. The reality is – and I’m embarrassed to say this – when we’re talking about moving 20 or 30 terabytes, which is a large-scale 3D seismic processing run, we literally still ship the drives. You can’t beat the bandwidth of a Phoenix truck, to be honest.

Interestingly enough, we’ve developed a product, part of our Cloud Fabric product, called Cloud Sync. It’s a downloadable tool that will move the data to the cloud in the background. We just added a file-transfer capability that uses UDP protocols to do bulk uploads. In recent testing we’re getting close to 2.4 gigabits per second. That’s theoretical performance, but it means that technically, in the right environment, with the right connection from the customer’s site to the cloud, I could move a terabyte of data in under an hour. That will start to be a game changer.
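The back-of-the-envelope arithmetic behind these figures is easy to check. A minimal sketch (illustrative only – real throughput depends on protocol overhead and link quality, and the 1 Gbit/s comparison rate is an assumption, not from the interview):

```python
def transfer_hours(terabytes: float, gigabits_per_sec: float) -> float:
    """Hours needed to move `terabytes` of data at a sustained rate of `gigabits_per_sec`."""
    bits = terabytes * 8e12                      # 1 TB = 8e12 bits (decimal units)
    seconds = bits / (gigabits_per_sec * 1e9)
    return seconds / 3600

# 1 TB at the cited 2.4 Gbit/s: just under an hour, as Houston says.
print(f"1 TB @ 2.4 Gbit/s: {transfer_hours(1, 2.4):.2f} h")   # ~0.93 h

# A 30 TB seismic dataset over an assumed 1 Gbit/s link: nearly three days,
# which is why shipping drives can still win for large runs.
print(f"30 TB @ 1 Gbit/s: {transfer_hours(30, 1.0):.1f} h")   # ~66.7 h
```

The same function also shows why the problem fades as pipes grow: at 10 Gbit/s, the 30 TB run drops to under seven hours.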

I think we’re starting to see increased bandwidth into the datacenter – there are more cloud datacenters – and so I think that will be overcome. Not today, or tomorrow, or even this year, but over the next couple of years I think the bandwidth problem will be solved. The whole concept of GreenButton is that you push the green button from within your desktop application and your job starts running. Clearly we need to get the data there first, so Cloud Sync will manage that data synchronization in the background, so that when you push the button, hopefully all of the files are there, we just do a quick synchronization, and the job starts running.

HPCwire: That problem you mentioned is one that everyone is trying to tackle – it seems pretty tricky. The theoretical performance you just cited is pretty impressive. What’s the actual performance based on some use cases?

Houston: Well, we’ll be launching the product later this month, so we’ll know more then. The reality is, it will clearly depend on what sort of connection the customer has from their network provider into the datacenter. Many of the datacenters have large pipes into them, so it’s really going to come down to the performance from the customer to the datacenter. But we’ve just started working with a company that is putting in a dedicated pipe between their facility and their cloud provider, so I would think in a couple of months we will have some real-world use cases.

HPCwire: That’s interesting – we’ll keep our eyes open for it. Let’s talk about GreenButton the company for a second. Let me make sure I have this straight: you’re a platform-as-a-service company dedicated specifically to HPC workloads. I know today you had an announcement around higher-end analytics services. Can you describe the company’s focus in terms of the types of applications and needs you’re serving, and what it is that makes GreenButton unique and distinctly HPC oriented?

Houston: Often it depends on who we’re talking to. We’re in this growth stage, and HPC hasn’t been sexy – it’s always been sexy to me, of course, but it hasn’t been particularly sexy in the marketplace, and the growth in the industry hasn’t been particularly investable. I know that we have an HPC audience, but the interesting thing for us is that we don’t just focus on rendering, seismic, genomic sequencing, Monte Carlo simulations, and CFD. A number of our customers – and we just had an announcement today – one of our largest customers is in the social media space. So really, what we look at at GreenButton is any type of workload, as long as it’s computationally intensive.

Interestingly enough, we’re doing a lot of work on video indexing. We’re taking in an hour of video content, processing that video, and making it indexable. That’s not a traditional HPC-type workload, but it is a big compute workload. That’s a product we’ve created called inCus, which uses a Microsoft technology called MAVIS. So it’s not just traditional HPC: we’ve got social media customers, we’re processing video, yes, we’re rendering movies and doing seismic processing work, but we’ve also been putting some work into running Hadoop workloads and big data analytics through our GreenButton Cloud Fabric engine as well. So for us it’s not just a particular vertical market – anything that is computationally intensive or big data intensive, we can take that workload, run it on GreenButton Cloud Fabric, and deploy it to any cloud platform.

HPCwire: That comment – that HPC’s definition is expanding beyond research and scientific computing workloads – is getting more and more common as data-intensive and computationally intensive work merge. But when it comes to cloud computing, especially on a public cloud resource, the concerns don’t change – they actually get more intense, whether the work is computationally intensive or data intensive. There’s a performance gap, so people who require very high performance have to take a close look at whatever cloud service they’re going to use, because they’re taking a pretty big hit on the virtualization side and the data movement side. How do you help customers work through that, and how do you prove the ROI of this over just buying a bunch of infrastructure, which is a big up-front cost?

Houston: I think that’s an ongoing process. The reality is, from early on, we’ve talked to customers who will run a job on their local server environment, then run it on the cloud, and they’ll say, “It runs slower.” And there’s no denying that. For one thing, we have to get the data up there, and there’s nothing like a dedicated environment running in your own datacenter.

That said, we’re seeing significant investment from the cloud providers in their infrastructures. So while there may never be parity between running a job offsite and running it in a high-performance dedicated environment on site, from a financial perspective it really does make sense. For the sort of workloads we’re talking about, it would be a very brave CIO who says, “I’m going all in on the cloud.” I’ve been in the cloud for 10 years, and I’m not sure I would say it in their position.

The reality is that the cloud makes a heck of a lot of sense for specific workloads that either take a long time or are very bursty – they’re only project based – and you can use the cloud to scale out the business. So I’d encourage IT managers to use the cloud to support very intensive workloads that they really don’t have the bandwidth or capability to run in-house, and not necessarily just go all in on the cloud. We’d love to talk to those folks, but I think we’re still a couple of years away from the cloud being all-encompassing and customers not needing datacenters anymore.

To give you an example, a number of our customers are using GreenButton to render their jobs. I did say one of them rendered an entire movie, but most of our customers have really hard shots whose render jobs will often run for a matter of days. They’re able to take those jobs, run them offsite on the cloud, and continue using their data center more efficiently for smaller jobs or the jobs they need a quick turnaround on.

HPCwire: To go back to some of the use cases we talked about earlier – you said you’re seeing a lot of growth on the social media side, the analytics side, big data, whatever that encompasses (it’s very big) – what are some of the emerging markets that you think balance computationally intensive workload needs with the advantages the cloud can offer? Where is the real hot growth right now?

Houston: It’s a little bit of both. We can’t talk about this customer publicly, but they’re a Fortune 500 manufacturing customer with a large Monte Carlo simulation that was running on their internal cluster and taking 3 hours to run. Using GreenButton Cloud Fabric, we were able to take that workload, spin up 1,100 processors, and have the job completed in under 10 minutes. That’s a traditional HPC workload, and just the power of spinning up 1,100 processors and getting that job turned around in 10 minutes is actually quite transformational for this customer.

We’re certainly seeing great growth in traditional HPC workloads – biotech, seismic, rendering. We’re starting to see CFD workloads – we have a customer in Germany doing fire dynamics simulation, which is a traditional HPC MPI-based workload, and they’re running those jobs in the cloud.

But also balancing that are some really interesting opportunities around video and social media. The press release you’ll see on our website today is with a company called Tout; effectively they provide 15-second video “tweets” (if you like) through the mobile phone, and they’re using GreenButton to provide analytics for those customers – what sort of things people are talking about, what are the themes, the ideas, the hot subjects.

So balancing out the traditional HPC workloads are a whole lot of new workloads that are computationally intensive but not viewed as traditional HPC.

HPCwire: In a case like the manufacturing one you cited – cloud pricing is already tricky enough for a lot of users to figure out – how does this layer factor into pricing? How do you work this out?

Houston: The good news is that it’s pretty darn cheap. For this particular workload, just think about the economics: I’m spinning up 1,100 processors, most cloud providers charge by the minute, and the job runs for ten minutes. That’s $20 or $30 to run the job. So to take that turnaround from 3 hours down to 10 minutes, at a cost of $20 or $30 per job – and they’ll negotiate that with their cloud provider – that’s a pretty compelling return on investment or business case.
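The cost figure is easy to sanity-check. A minimal sketch, assuming a hypothetical per-core-hour rate (the actual rate is negotiated with the provider and not stated in the interview):

```python
processors = 1100
runtime_minutes = 10
price_per_core_hour = 0.15   # assumed USD rate; illustrative, not from the interview

core_hours = processors * runtime_minutes / 60   # ~183 core-hours consumed
cost = core_hours * price_per_core_hour
print(f"{core_hours:.0f} core-hours -> ${cost:.2f}")   # ~$27.50
```

At any plausible rate in the $0.10–$0.20 per core-hour range, a ten-minute burst on 1,100 processors lands squarely in the $20–$30 band Houston cites.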

I think the challenge is for the cloud provider, in that the pricing is so darned attractive for the end user. The end user just pays for the compute and storage they use for a particular job, and that is becoming more and more compelling for many customers. The challenge for the cloud provider is how to make money out of that. If you’ve watched what’s happening in the cloud, you’ve seen the transition to SaaS-type applications and hosted websites, and the cloud makes good sense for that. Increasingly, organizations are using the cloud for offsite storage, and that makes sense too.

None of those is particularly profitable for the cloud providers. The real money and profit for the cloud providers is in on-demand computational work, and the challenge for them is utilization. If they can keep their cloud fully utilized, or even 80% utilized, it’s a profitable business – so how do you attract enough customers to maintain that utilization? And it’s a win-win if you can do it: from the customer’s point of view, if I’m only using the capacity 10 or 20 or 30 percent of the time, the cloud just makes sense. In fact, in the analysis we’re doing right now with current cloud pricing, the break-even point is almost 50%. If I build a datacenter and run it less than 50% of the time, then depending on my licensing costs it may well be cheaper to run those jobs on the cloud under the current pricing model. That’s an interesting tipping point that I’m sure most CIOs and CFOs are considering at the moment.
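The ~50% tipping point falls out of a simple model. A rough sketch with hypothetical rates (both numbers are assumptions chosen so that on-demand cloud costs twice the amortized on-prem hourly rate; they are not from the interview):

```python
on_prem_cost_per_core_hour = 0.075   # assumed: amortized hardware + power + ops
cloud_cost_per_core_hour = 0.15      # assumed: on-demand rate, 2x on-prem

def cheaper_in_cloud(utilization: float) -> bool:
    """You pay for on-prem capacity every hour, but for cloud only when you use it,
    so the effective on-prem cost per *useful* core-hour rises as utilization falls."""
    effective_on_prem = on_prem_cost_per_core_hour / utilization
    return cloud_cost_per_core_hour < effective_on_prem

for u in (0.3, 0.5, 0.8):
    print(f"{u:.0%} utilized: cloud cheaper? {cheaper_in_cloud(u)}")
```

With cloud at twice the on-prem hourly rate, the crossover sits at exactly 50% utilization: below it the idle capacity makes owning more expensive per useful hour, above it owning wins. Licensing costs, as Houston notes, shift the crossover in practice.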

HPCwire: Right. So as with everything in high end infrastructure, it all depends.

Houston: I don’t think anyone would argue that the pricing isn’t compelling. The biggest problem, and the one we’ve been really focused on, is governance. Anybody running their jobs in the cloud – hopefully they’re having a good experience. What is a really bad experience for people is not the costs, but where to apportion the costs.

If you get a bill from a cloud provider today – it doesn’t matter who, whether it’s Amazon or Microsoft or anybody – it’s like a phone bill. You’ve got no idea who ran the jobs, what department they’re in, or what project it was for. Did they have the authorization to run the jobs? I’m sure in most organizations people are running jobs on the cloud using the company credit card.

So for an IT department and a finance department, wrangling those costs and apportioning them to a user, a project, or a department is an absolute nightmare, and we’ve been working very hard at providing governance. One of the things we’ve recently been granted a patent on is our ability to profile a job and provide an SLA commitment on time and cost. We think that’s unique, and I think it’s going to be a game changer – perhaps the tipping point for broader cloud adoption: how long will it take, how much will it cost, and then apportioning that correctly to the department, user, and project.
