September 30, 2010
NEW HAVEN, Conn., Sept. 29 -- The rules that govern the world of the very small, quantum mechanics, are known for being bizarre. One of the strangest tenets is something called quantum entanglement, in which two or more objects (such as particles of light, called photons) become inextricably linked, so that measuring certain properties of one object reveals information about the other(s), even if they are separated by thousands of miles. Einstein found the consequences of entanglement so unpalatable he famously dubbed it "spooky action at a distance."
Now a team led by Yale researchers has harnessed this counterintuitive aspect of quantum mechanics and achieved the entanglement of three solid-state qubits, or quantum bits, for the first time. Their accomplishment, described in the Sept. 30 issue of the journal Nature, is a first step towards quantum error correction, a crucial aspect of future quantum computing.
"Entanglement between three objects has been demonstrated before with photons and charged particles," said Steven Girvin, the Eugene Higgins Professor of Physics & Applied Physics at Yale and an author of the paper. "But this is the first three-qubit, solid-state device that looks and feels like a conventional microprocessor."
The new result builds on the team's development last year of the world's first rudimentary solid-state quantum processor, which they demonstrated was capable of executing simple algorithms using two qubits.
The team, led by Robert Schoelkopf, the William A. Norton Professor of Applied Physics & Physics at Yale, used artificial "atoms" -- actually made up of a billion aluminum atoms that behave as a single entity -- as their qubits. These "atoms" can occupy two different energy states, akin to the "1" and "0" or "on" and "off" states of regular bits used in conventional computers. The strange laws of quantum mechanics, however, allow qubits to be placed in a "superposition" of these two states at the same time, resulting in far greater information storage and processing power.
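In the standard notation of quantum mechanics (a detail not spelled out in the release), such a superposition is written |ψ⟩ = α|0⟩ + β|1⟩, where the amplitudes α and β satisfy |α|² + |β|² = 1 and set the probabilities of finding the qubit in each of the two states when it is measured.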
In this new study, the team achieved an entangled state by placing the three qubits in a superposition of two possibilities: either all three in the 0 state or all three in the 1 state. They were able to attain this entangled state 88 percent of the time.
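In the same notation, the state described, an equal superposition of all three qubits reading 0 and all three reading 1, is conventionally written (|000⟩ + |111⟩)/√2 and is known as a Greenberger-Horne-Zeilinger (GHZ) state.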
With the particular entangled state the team achieved, they also demonstrated for the first time the encoding of quantum information from a single qubit into three qubits using a so-called repetition code. "This is the first step towards quantum error correction, which, as in a classical computer, uses the extra qubits to allow the computer to operate correctly even in the presence of occasional errors," Girvin said.
Such errors might include a cosmic ray hitting one of the qubits and switching it from a 0 to a 1 state, or vice versa. By replicating the qubits, the computer can confirm whether all three are in the same state (as expected) by checking each one against the others.
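As a rough classical analogue of the checking procedure described above, the sketch below (plain Python; the function names are illustrative, not taken from the paper) copies one logical bit into three physical bits, flips at most one of them at random to mimic a stray error, and recovers the original value by majority vote. The actual quantum repetition code uses entangling gates and parity measurements rather than direct copying, since unknown quantum states cannot be cloned, so this is only meant to illustrate the underlying idea.

```python
import random

def encode(bit):
    """Encode one logical bit into three physical bits (classical repetition code)."""
    return [bit, bit, bit]

def apply_random_error(bits, p=0.1):
    """With probability p, flip one randomly chosen bit (e.g. a stray cosmic-ray hit)."""
    if random.random() < p:
        i = random.randrange(len(bits))
        bits[i] ^= 1
    return bits

def decode(bits):
    """Recover the logical bit by majority vote: a single flipped bit is outvoted."""
    return 1 if sum(bits) >= 2 else 0

logical = 1
received = apply_random_error(encode(logical))
print(received, "->", decode(received))  # a single bit flip is always corrected
```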
"Error correction is one of the holy grails in quantum computing today," Schoelkopf said. "It takes at least three qubits to be able to start doing it, so this is an exciting step."
Other authors of the paper include Leonardo DiCarlo, Matthew Reed, Luyan Sun, Blake Johnson, Jerry Chow, Luigi Frunzio and Michel Devoret (all of Yale University); and Jay Gambetta (University of Waterloo).
Source: Yale University