May 26, 2011
On Wednesday, D-Wave Systems made history by announcing the sale of the world's first commercial quantum computer. The buyer was Lockheed Martin Corporation, which will use the machine to help solve some of its "most challenging computation problems." Lockheed purchased the system, known as D-Wave One, along with maintenance and associated professional services. Terms of the deal were not disclosed.
D-Wave One uses a superconducting 128-qubit (quantum bit) chip, called Rainier, representing the first commercial implementation of a quantum processor. An early prototype, a 16-qubit system called Orion, was demonstrated in February 2007. At the time, D-Wave was talking about future systems based on 512-qubit and 1024-qubit technology, but the 128-qubit Rainier turned out to be the company's first foray into the commercial market.
According to D-Wave co-founder and CTO Geordie Rose, the D-Wave One uses a method called "quantum annealing" to solve discrete optimization problems. While that may sound obscure, the technique applies to all sorts of artificial intelligence-type applications, such as natural language processing, computer vision, bioinformatics, financial risk analysis, and other kinds of highly complex pattern matching.
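To make "discrete optimization" concrete, here is a tiny illustrative sketch (not D-Wave's API or software) of the kind of problem an annealer targets: a QUBO (quadratic unconstrained binary optimization), where we minimize x^T Q x over binary vectors x. The matrix values below are made up for illustration; exhaustive search works only at this toy scale.

```python
# Hypothetical QUBO example: minimize x^T Q x over binary vectors x.
# Q is an illustrative upper-triangular matrix, not from any real problem.
import itertools

Q = [[-1.0, 2.0, 0.0],
     [0.0, -1.0, 2.0],
     [0.0, 0.0, -1.0]]

def energy(x, Q):
    """Evaluate the QUBO objective x^T Q x for a binary tuple x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Brute force is feasible for 3 variables (2^3 = 8 candidates); annealing
# methods are used when the search space grows exponentially large.
best = min(itertools.product([0, 1], repeat=3), key=lambda x: energy(x, Q))
print(best, energy(best, Q))
```

The diagonal terms reward setting each bit to 1, while the positive off-diagonal couplings penalize turning on adjacent bits together, so the minimizer balances the two.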
We asked Rose to describe the D-Wave system and the underlying technology in more detail.
HPCwire: In a nutshell, can you describe the machine and its construction?
Rose: The D-Wave One is built around a superconducting processor. The processor is shielded from noise using specialized filtering and shielding systems that ensure its environment is extremely quiet, and it is cooled to almost absolute zero during operation. The entire system's footprint is approximately 100 square feet.
While there is a substantial amount of exotic technology inside the D-Wave One, the system has been built to require very little specialized knowledge to operate. Users interact with the system via an API that allows the D-Wave One to be accessed remotely from a variety of programming environments, including Python, Java, C++, SQL and MATLAB.
HPCwire: What is "quantum annealing?"
Rose: Quantum annealing is a prescription for solving certain types of hard computing problems. In order to run quantum annealing algorithms, hardware that behaves quantum mechanically — such as the Rainier processor in the D-Wave One — is required. Quantum annealing is conceptually similar to simulated annealing and genetic algorithms, but is much more powerful.
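Since Rose compares quantum annealing to simulated annealing, a minimal sketch of the classical technique may help. The code below (illustrative only, not D-Wave software) minimizes a small Ising-style objective over binary spins using a Metropolis acceptance rule with a geometric cooling schedule; the couplings are invented for the example.

```python
# Simulated annealing sketch: minimize sum of J[i][j]*s[i]*s[j]
# over spins s[i] in {-1, +1}. Couplings J are illustrative only.
import math
import random

random.seed(0)

# Antiferromagnetic couplings on a 4-spin ring.
J = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 3): 1.0}

def energy(s):
    """Ising energy of spin configuration s under couplings J."""
    return sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

def anneal(n_spins=4, steps=2000, t_start=5.0, t_end=0.01):
    s = [random.choice([-1, 1]) for _ in range(n_spins)]
    e = energy(s)
    for step in range(steps):
        # Geometric cooling: temperature decays from t_start to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        i = random.randrange(n_spins)
        s[i] = -s[i]                      # propose a single spin flip
        e_new = energy(s)
        # Metropolis rule: always accept downhill moves; accept uphill
        # moves with probability exp(-dE / T).
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new
        else:
            s[i] = -s[i]                  # reject: undo the flip
    return s, e

state, e = anneal()
print(state, e)
```

For this ring, alternating spins give the ground-state energy of -4. Quantum annealing replaces the thermal fluctuations that drive these uphill moves with quantum tunneling through energy barriers, which is the sense in which the hardware approach is claimed to be more powerful.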
HPCwire: Can you prove that quantum computing is actually taking place?
Rose: This is the question we set out to answer with the research published in the recent issue of Nature. The answer was a conclusive "yes."
HPCwire: How much power is required to run the machine?
Rose: The total wall-plug power consumed by a D-Wave One system is 15 kilowatts. This power requirement will not change as the processors become more powerful over time.
HPCwire: How much does D-Wave One cost?
Rose: Pricing for D-Wave One is consistent with large-scale, high-performance computing systems.
HPCwire: What kinds of problems is it capable of solving? Have you demonstrated any specific algorithms?
Rose: We have used the D-Wave One to run numerous applications. For example, we used the system to solve optimization problems that arise in building software to detect cars in images. The process outputs detection software that can be deployed anywhere, such as on mobile phones. The detector produced with the D-Wave One, in collaboration with researchers from Google, was among the best car detectors ever built. It is discussed at http://googleresearch.blogspot.com/2009/12/machine-learning-with-quantum.html.
HPCwire: What's next?
Rose: This is a very significant time in the history of D-Wave. We've sold the world's first commercial quantum computer to a large global security company, Lockheed Martin. That's a real milestone for us. We are excited to work with Lockheed and future customers to tackle complex problems traditional methods cannot resolve. Last week we were validated on the science side by Nature and this week, on the business side, by the sale of our quantum computer to this Fortune 500 company.