With no revolutionary shift in large-scale computing hardware on the horizon, an increasing amount of attention is turning to various modes of optimization. These efforts extend beyond tweaking individual codes for HPC systems into the realm of using detailed, holistic performance data to maximize performance and efficiency across existing architectures.
“If you think you understand quantum physics, you don’t understand quantum physics.” — Richard Feynman, theoretical physicist

The first commercial quantum computer was pioneered by Canadian firm D-Wave Systems, which unveiled its first prototype, a 16-qubit superconducting adiabatic quantum processor, in 2007. This novel type of superconducting processor relies on quantum annealing, a form of adiabatic quantum computation, with the aim of dramatically accelerating certain optimization workloads.
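To make the idea more concrete, an annealing processor of this kind is typically programmed by encoding a problem as a QUBO (quadratic unconstrained binary optimization) instance; the hardware then physically settles toward a low-energy assignment of the binary variables. The sketch below is a minimal, purely classical illustration of that formulation: it brute-forces a tiny QUBO with NumPy in place of the physical annealing step. The coefficient matrix and variable names are invented for illustration, not drawn from any real D-Wave problem.

```python
import itertools
import numpy as np

# Toy QUBO: minimize x^T Q x over binary vectors x.
# A quantum annealer searches for this minimum physically; here we
# brute-force the (tiny) search space classically to show the formulation.
Q = np.array([
    [-1.0,  2.0,  0.0],
    [ 0.0, -1.0,  2.0],
    [ 0.0,  0.0, -1.0],
])  # illustrative coefficients, not a real problem instance

best_x, best_energy = None, float("inf")
for bits in itertools.product([0, 1], repeat=Q.shape[0]):
    x = np.array(bits)
    energy = x @ Q @ x
    if energy < best_energy:
        best_x, best_energy = x, energy

print(f"lowest-energy assignment: {best_x}, energy: {best_energy}")
```

On real hardware the same Q matrix would be mapped onto qubit biases and couplings, but the brute-force loop stands in for that step here because the exhaustive search is only feasible for a handful of variables.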
Most efforts to address the problem of shrinking transistor geometries have focused on making the devices behave more precisely. But what if, instead of trying to make the transistors better, we purposely made them worse? Although it sounds counterintuitive, developing processors that are deliberately error-prone is exactly what one team of researchers has set out to do.
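The intuition behind such inexact or probabilistic processors is that many workloads, such as image and signal processing, tolerate small arithmetic errors, so energy and area can be saved by relaxing precision where it matters least. The sketch below is a software emulation of that trade-off, not the researchers' actual hardware: it models an adder whose lowest-order result bits occasionally flip, showing that the relative error stays small. The function name, error probability, and number of noisy bits are illustrative assumptions.

```python
import random

def inexact_add(a: int, b: int, flip_prob: float = 0.1, noisy_bits: int = 3) -> int:
    """Emulate an energy-saving adder whose lowest-order bits may flip.

    Only the bottom `noisy_bits` of the result are unreliable, so even
    when a flip occurs the relative error remains bounded.
    """
    result = a + b
    for bit in range(noisy_bits):
        if random.random() < flip_prob:
            result ^= (1 << bit)  # flip one low-order bit of the sum
    return result

random.seed(0)
exact = 1000 + 2345
approx = inexact_add(1000, 2345)
print(f"exact={exact} approx={approx} relative error={abs(exact - approx) / exact:.4%}")
```

Restricting the unreliability to the low-order bits is the key design choice: it keeps worst-case error proportional to a few units in the last place, which is why applications dominated by perceptual or statistical output can absorb it.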