What are the advantages of the binary system rather than the decimal system? Does it improve computing speed?
At the most basic level, modern computers perform their calculations using a number system with only two values per digit (0 and 1). This stands in contrast to the decimal system most people use in daily life, which has ten values per digit (0-9).
In the abstract, there is no reason why one number system must be used over another in computing. In fact, Charles Babbage's famous mechanical computer was designed around the decimal system, and researchers have also experimented with ternary systems, which use three values per digit instead of two. However, the physical technology that makes up the circuitry of computer processors has developed in a way that is natively binary: each elementary component operates in one of two states, on or off. These components can perform basic logical operations (known as bitwise operations) extremely quickly, and it is these operations that form the building blocks of modern computing, despite their incongruence with our everyday decimal habits.
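To illustrate how those bitwise building blocks compose into arithmetic, here is a minimal sketch (in Python, purely for illustration) of adding two non-negative integers using only AND, XOR, and shift, which is essentially how a hardware adder circuit works:

```python
def bitwise_add(a: int, b: int) -> int:
    """Add two non-negative integers using only bitwise operations."""
    while b != 0:
        carry = a & b    # AND finds the digit positions that generate a carry
        a = a ^ b        # XOR adds the digits while ignoring carries
        b = carry << 1   # shift moves each carry to the next digit position
    return a

print(bitwise_add(13, 29))  # 42
```

Each of these three operations maps directly onto a simple two-state logic gate, which is why they can be executed in a single processor cycle.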
Regardless of the mismatch with our everyday number usage, binary has won out over potential alternatives largely because of its straightforward physical implementation. While it is theoretically possible to build machines with more than two basic states (such as optical or quantum computers), along with a set of logical operations suited to those machines, such technologies are not yet mature enough for practical applications.