Basic Concepts of Quantum Computing

Most phenomena we experience in everyday life, if not all, are governed by classical mechanics: from astronomical phenomena, like sunsets or eclipses, to the workings of electrical devices such as a television or a microwave oven. This is because we live in a macroscopic world, and thus are not aware of the laws that govern the atomic, or even subatomic, scales: the laws of the quantum world.

A classical computer (like the one you are most likely reading this article with) works on the basis of classical physics. It generates electrical currents whose voltage levels determine whether each signal represents a 0 or a 1. These impulses are conducted through a series of logic gates, and through various combinations of countless impulses, the result is ultimately shown on the computer's display. We conclude, therefore, that at the physical level, the behaviour of the computer is well described by electromagnetic phenomena, which are thoroughly explained by Maxwell's equations.
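To make the idea of combining logic gates concrete, here is a minimal sketch in Python of a half adder, one of the simplest useful gate circuits. This is an illustration of the logical behaviour only, not of how real hardware is wired; the function names `AND`, `XOR`, and `half_adder` are ours.

```python
# Each "wire" carries a 0 or a 1; gates combine wires into new wires.
def AND(a, b):
    return a & b

def XOR(a, b):
    return a ^ b

def half_adder(a, b):
    """Add two bits; returns (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

# Enumerate the full truth table of the circuit.
for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "->", half_adder(a, b))
```

Chaining enough of these simple circuits together is, in essence, how a classical processor performs arithmetic.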

At the beginning of the 20th century, however, there was a revolution in physics: the quantum revolution. Through the efforts of physicists such as Max Planck, Erwin Schrödinger, Werner Heisenberg, Niels Bohr, Louis de Broglie and Paul Dirac, we are now aware of some of the rules that govern the quantum world, such as non-locality, superposition of quantum states, quantum entanglement and quantum fluctuations.

When these laws are applied to computation, we find that, for certain problems, we obtain computational power exponentially greater than what classical computing allows. How these phenomena work, and how they can be applied to computation, is what we are going to explore in this page.
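A first glimpse of where that power comes from: while n classical bits hold exactly one of 2^n values at a time, the state of n qubits is described by 2^n complex amplitudes at once. The sketch below simulates this on a classical computer, putting 3 qubits into an equal superposition of all 8 basis states with Hadamard gates. The helper `hadamard` is our own toy implementation, not a standard library function.

```python
import math

def hadamard(state, k, n):
    """Apply a Hadamard gate to qubit k of an n-qubit state vector."""
    new = state[:]
    for i in range(2 ** n):
        if (i >> k) & 1 == 0:          # pair each basis state with its
            j = i | (1 << k)           # partner that differs in bit k
            a, b = state[i], state[j]
            new[i] = (a + b) / math.sqrt(2)
            new[j] = (a - b) / math.sqrt(2)
    return new

n = 3
state = [0.0] * (2 ** n)
state[0] = 1.0                         # start in |000>
for k in range(n):
    state = hadamard(state, k, n)

# All 2^3 = 8 amplitudes now equal 1/sqrt(8): an equal superposition
# of every bit string from 000 to 111.
print(state)
```

Note how the simulation must track all 2^n amplitudes explicitly, which is exactly why simulating quantum systems classically becomes intractable as n grows, and why genuine quantum hardware is interesting.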