What is a bit?

Bits and pieces about the bit
We've all heard about bits, megabytes, megabits, etc. But what do these terms really mean? Well, let's go back to the basics.

In computer science and information theory, a binary digit (bit for short) is defined as the smallest possible piece of information we can work with: one of the two possible digits in a binary number system. Depending on what kind of system we are working with, a bit can be represented or stored in different ways. Since a bit can only have two values, effectively any system that can distinguish between two states can store this information.

The history of the bit dates back to the 17th century, when Francis Bacon, an English philosopher and scientist, devised a way to encrypt information by turning every letter in a message into a sequence of five letters, each either A or B.

The concept of the bit later developed through the use of punched cards and perforated paper tape, on which each marked position could either be punched through or not, so in essence every position carried one bit of data.

In the 19th century, with the development of the telegraph system, Morse code was devised as a means of communication over it. Since every letter transmitted in Morse code consists of dots and dashes, each element of Morse code is essentially a bit.

Another simple example is a light bulb. It can be either turned on or off. If we assign the bit's possible values to the light bulb's two states, we effectively have a way to store this information. Of course, one light bulb can store very little information, and we would need a series of thousands, even millions of light bulbs to store any meaningful amount of information, and that's where computers come into play.

RAM, short for Random Access Memory, is a computer's primary data storage. Essentially, RAM works much faster than a hard drive, so any data you are currently working on is stored there. The RAM itself is made up of bistable gates, also known as flip-flops. Yes, like the footwear. For all intents and purposes, a flip-flop is in essence a light bulb. This is the reason everything you were working on gets lost when you lose power: the flip-flops can't hold the voltage that signifies a bit.
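To make "bistable" concrete, here is a toy sketch in Python (an illustration only, nothing like real circuitry) of an SR latch: two cross-coupled NOR gates whose feedback loop "remembers" one bit for as long as it is powered.

```python
# Toy SR latch: two cross-coupled NOR gates that remember one bit.
def nor(a, b):
    return int(not (a or b))

def settle(s, r, q=0, nq=1):
    """Apply set/reset inputs and iterate until the gate pair settles."""
    for _ in range(4):
        q, nq = nor(r, nq), nor(s, q)
    return q

q = settle(s=1, r=0)                 # set: the latch now stores 1
q = settle(s=0, r=0, q=q, nq=1 - q)  # inputs released: the 1 is held
print(q)                             # 1
q = settle(s=0, r=1, q=q, nq=1 - q)  # reset: back to 0
print(q)                             # 0
```

The key point the sketch shows: with both inputs at 0, the output depends only on the state the latch already holds, which is exactly what it means to store a bit.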

Of course, one bit is very little information to work with, and that is where the byte comes into play. A byte is, most often, a series of eight bits. Historically, there was some confusion about the size of a byte, but in recent years, thanks to agreements between the major hardware vendors, the byte has settled on a fixed size of 8 bits. Since one byte contains 8 bits, each of which can have 2 values, one byte can take a total of 2^8 = 256 different values, depending on the bit combination. When converted into the decimal number system, the minimum value of a byte is 0, and the maximum 255.
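A few lines of Python confirm the arithmetic: eight bits, each doubling the number of combinations, give 256 values running from 0 to 255.

```python
# One byte is 8 bits; each bit doubles the number of possible values.
BITS_PER_BYTE = 8
values = 2 ** BITS_PER_BYTE   # 256 distinct combinations

minimum = 0                   # all bits off: 0b00000000
maximum = values - 1          # all bits on:  0b11111111

print(values, minimum, maximum)   # 256 0 255
print(bin(maximum))               # 0b11111111
```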

To put this information into perspective: each pixel on your screen is a piece of information. This information consists of three bytes. Each of these bytes represents a value between 0 and 255, assigned to the colors red, green and blue. This gives a total of over 16 million colors we can get. This means that at any moment, a screen at the standard resolution of 1920x1080 pixels displays information equal to 6,220,800 bytes, or about 6 megabytes of data. Given the standard movie frame rate of 24 frames per second, this means that when watching a movie on your computer, your monitor effectively displays a total of 149,299,200 bytes per second. A standard computer game today displays over 60 frames per second.
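The screen arithmetic above is easy to reproduce:

```python
# Reproduce the screen-data arithmetic: 3 bytes per pixel (R, G, B).
width, height = 1920, 1080
bytes_per_pixel = 3

colors = 256 ** 3                # possible colors per pixel
frame_bytes = width * height * bytes_per_pixel

print(colors)                    # 16777216, i.e. over 16 million
print(frame_bytes)               # 6220800 bytes per frame

fps = 24                         # standard movie frame rate
print(frame_bytes * fps)         # 149299200 bytes per second
```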

Another major source of confusion is the conversion into larger units. In information theory, we convert bits into bytes by dividing the number by 8. We convert bytes into kilobytes by dividing the number of bytes by 1024 (2^10); kilobytes into megabytes by further dividing that number by 1024, and so on. In commercial areas, the number of kilobytes is calculated by dividing the number of bytes by 1000, and so on. This is why the effective size of a 1 terabyte hard drive isn't actually 1 TB as your computer reports it, but in fact around 931 gigabytes.
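The 1 TB example works out like this: the drive is sold as 10^12 bytes (decimal convention), but the operating system divides by 1024 three times.

```python
# Binary (1024-based) vs decimal (1000-based) unit conversion.
def bytes_to_binary_gigabytes(n_bytes):
    """Gigabytes the way operating systems traditionally count them."""
    return n_bytes / 1024 ** 3

drive_bytes = 10 ** 12           # a "1 TB" drive in the vendor's counting
print(round(bytes_to_binary_gigabytes(drive_bytes), 1))   # 931.3
```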

Another unit of measurement we often encounter is the megabit. Due to the already mentioned confusion about byte size, as well as the fact that information is transferred via serial communication on most computer networks, the most reliable way of declaring network speed is in bits (and their larger multiples) per second. For this reason we still express network (Internet included) speed in megabits per second.
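Since file sizes are shown in bytes but connection speeds in bits, comparing the two just means dividing by 8. A small sketch:

```python
# Megabits per second vs megabytes per second: 8 bits to the byte.
def mbps_to_megabytes_per_second(mbps):
    return mbps / 8

speed = 100                                   # a "100 Mbps" connection
print(mbps_to_megabytes_per_second(speed))    # 12.5 megabytes per second

file_megabytes = 100
print(file_megabytes / mbps_to_megabytes_per_second(speed))   # 8.0 seconds
```

So a connection advertised as 100 megabits per second moves at most 12.5 megabytes of data each second.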