In classical computers, like the one I'm writing this on, information is stored in bits, which have a value of either 0 or 1. These bits can be combined in different ways to perform calculations. The time it takes to do a calculation depends on the number of steps, which in turn depends on the number of bits.

Quantum computers use quantum bits, or qubits. Unlike classical bits, a qubit can exist in a superposition, a combination of 0 and 1, until it is measured, and the states of different qubits can become entangled, meaning they depend on one another. For certain problems, these properties can be exploited to perform calculations exponentially faster than any known classical algorithm.
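A rough way to picture superposition, sketched here as a toy Python snippet (this simulates the math on a classical machine, it is not how a quantum computer works internally): a qubit's state is a pair of amplitudes, and measurement probabilities come from squaring them.

```python
import math

# A single qubit's state is a pair of complex amplitudes (alpha, beta)
# with |alpha|^2 + |beta|^2 = 1. Measuring the qubit yields 0 with
# probability |alpha|^2 and 1 with probability |beta|^2.
def measure_probabilities(alpha, beta):
    return abs(alpha) ** 2, abs(beta) ** 2

# An equal superposition: the qubit is neither 0 nor 1 until measured.
alpha = beta = 1 / math.sqrt(2)
p0, p1 = measure_probabilities(alpha, beta)
print(p0, p1)  # each is ~0.5
```

The quantum speedup comes from the fact that n entangled qubits require 2^n such amplitudes to describe, so a classical simulation quickly becomes intractable as qubits are added.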

One of the classic problems where quantum computers promise an advantage is prime factorization: finding the prime numbers that, multiplied together, produce a given number. You can imagine the process as a tree:

[Figure: a factor tree breaking 24 down into 3 × 2 × 2 × 2]
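On a classical computer, the straightforward approach is trial division, peeling off the smallest factor at each step, much like building the tree above. A minimal sketch:

```python
def prime_factors(n):
    """Return the prime factors of n (with multiplicity) by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:  # peel off d as many times as it divides n
            factors.append(d)
            n //= d
        d += 1
    if n > 1:              # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(24))  # [2, 2, 2, 3]
```

This works instantly for small numbers, but the running time grows roughly with the square root of n, which is why factoring the enormous numbers used in cryptography is out of reach for classical machines.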

Why should we care about factoring numbers? Widely used encryption algorithms such as RSA are based on the product of two large primes. You share the product, which others can use to encrypt a message for you, but the only practical way to decrypt it is to use the individual prime factors, which you keep secret. The security rests on factoring being infeasible for classical computers; a sufficiently large quantum computer running Shor's algorithm, however, could factor the product you shared and read your messages.
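The scheme can be illustrated with a toy RSA example using tiny primes (real keys use primes hundreds of digits long; the numbers here are illustrative only):

```python
# Toy RSA key generation with deliberately tiny primes.
p, q = 61, 53          # the secret prime factors
n = p * q              # the public modulus: 3233
phi = (p - 1) * (q - 1)
e = 17                 # public exponent, chosen coprime with phi
d = pow(e, -1, phi)    # private exponent: modular inverse (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # anyone who knows (n, e) can encrypt
plaintext = pow(ciphertext, d, n)  # only the holder of d can decrypt
print(plaintext)  # 42
```

Notice that computing the private exponent d requires phi, which requires knowing p and q. Anyone who can factor n can therefore derive the private key, which is exactly what a quantum computer would make feasible.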

Google published a paper in the prestigious journal Nature claiming to have reached "quantum supremacy" (John Preskill, who coined the phrase, now regrets the choice). The trouble with quantum systems is that they're fragile: any disturbance from outside the system affects the states of the qubits, invalidating the calculation. This risk grows with the number of qubits, so the question becomes: can we maintain the integrity of a system with enough qubits to be useful?

Quantum supremacy refers to the point where a quantum computer performs a calculation that is infeasible for any classical computer. Google claims to have reached that point with its 53-qubit Sycamore processor, which completed a sampling task in 200 seconds that the team estimates would take a state-of-the-art classical supercomputer 10,000 years. IBM disputes this estimate, arguing that a classical machine could do it in a few days.

It remains to be seen who is right, but it's still cool to see a project with such world-changing potential coming into being.
