Classical computers are basically big calculators working with electric voltages and currents that represent the numbers being operated on and their results. Until around World War II, computers represented the digits 0 through 9 directly according to the voltage levels in the circuit, but from around 1950 onward we moved to the binary system: only 0 and 1 are used to represent any number (via numeric base conversion). This is what allowed computers to get as small as they are today.
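To make that base conversion concrete, here's a minimal sketch (the function name `to_binary` is my own, just for illustration) of how any decimal number maps onto 0s and 1s:

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to its binary representation."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # remainder is the next binary digit
        n //= 2                  # integer-divide to move to the next bit
    return "".join(reversed(bits))

print(to_binary(42))  # → 101010
```

Repeated division by 2 is exactly the textbook base-conversion algorithm; a computer's circuits just store the resulting string of bits as on/off states.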
Transistors are components that can store two states: on or off, or simply 1 and 0. When we combine several of these transistors, we can work with more numbers and perform more complex operations in parallel, making computers more efficient. There's a rule of thumb called Moore's Law, which says the number of transistors in Integrated Circuits (ICs) doubles roughly every 18 months. This law was proposed in 1965 and still holds today.
Yet we're getting to a point where we're reaching the physical limits of silicon-based ICs, so we'll need an alternative to keep increasing computing power to meet the demands of people, research, and companies. The most promising alternative today is so-called Quantum Computing (which we already introduced in a previous post). The goal of quantum computers is to take advantage of the properties of quantum mechanics to make computers more efficient.
One important property is the superposition of states, described by Schrödinger in the first half of the 20th century: particles at the quantum scale (around Planck's constant) can be in both opposing states at the same time. That means a particle representing a number in a computer can be in both the 0 and the 1 state simultaneously. This makes it possible for computers to perform more than one calculation at the same time using fewer representing elements. These superposed values are the so-called "qubits".
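A common way to picture this (a simulation sketch, not how real quantum hardware works) is to write a qubit as a 2-component vector of complex amplitudes, one for the 0 state and one for the 1 state:

```python
import numpy as np

# Basis states: a qubit that is definitely 0, and one that is definitely 1.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Equal superposition: the qubit is "in both states at once" until measured.
# Dividing by sqrt(2) keeps the state normalized (total probability = 1).
plus = (ket0 + ket1) / np.sqrt(2)

print(plus)  # both amplitudes are about 0.707
```

The squared magnitude of each amplitude gives the probability of seeing that outcome, which is exactly the uncertainty discussed next.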
The catch is: those states are probabilistic, not deterministic. That is, there is some probability of the qubit being in 0 and some of it being in 1, each ranging from 0% to 100%, and the two are complementary (they always sum to 100%). So we have an "uncertainty" about the states we are calculating with, which can cause the same calculation to end in different results. That means we need some mechanism to reduce this error probability and make the computations more precise. After all, nobody wants a calculator that can't give you the right results.
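This can be simulated classically: a short sketch (the 80/20 split below is an arbitrary choice for illustration) showing that the probabilities always sum to 100%, yet repeated measurements of the same state give different results:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Amplitudes whose squared magnitudes give an 80%/20% split.
alpha, beta = np.sqrt(0.8), np.sqrt(0.2)
p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
assert abs(p0 + p1 - 1.0) < 1e-12  # complementary: always sum to 100%

# Simulate 1000 measurements of the *same* state.
outcomes = rng.choice([0, 1], size=1000, p=[p0, p1])
print(outcomes[:10])    # a mix of 0s and 1s, not one fixed answer
print(outcomes.mean())  # fraction of 1s, close to 0.2
```

This is why error mitigation matters: a single run can land on the "wrong" outcome, and only the statistics over many runs recover the intended answer.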
That’s all for now, folks!
In the next post I'll talk a little more about representing qubits and performing operations on them.