We've heard about bits, defined as the smallest possible pieces of information. The bit rate is, in essence, the speed at which information is carried through a network. Or, to rephrase it: bit rate is the number of bits that are transferred per unit of time.
To illustrate: we've all probably seen (in documentaries, movies or even cartoons) a telegraph key device. Now, imagine two key devices connected by a line. The two people operating the key devices are using Morse code to send messages to one another. Since Morse code uses two kinds of pulses, a dash and a dot, it corresponds to a binary system, and every dash or dot can be seen as 1 bit of data. Let's say person A must send an SOS signal to person B. In binary, the SOS signal is 000111000 (it is actually more complex than this, but for the sake of illustration we will simplify it). That is a total of 9 bits that need to be transferred. If it takes one second for person A to send this message, this corresponds to a bit rate of 9 bits per second. Now to put it into context using today's numbers: if we wanted to send a 1 MB file in this manner, it would take us roughly 260 hours, or about 11 days. Downloading a 1 GB movie at this rate would take about 30 years. Let's hope The Hobbit is still a classic in the year 2045.
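The estimates above are easy to reproduce. A minimal sketch, using binary prefixes (1 MB = 1024² bytes, 1 GB = 1024³ bytes) and ignoring all overhead:

```python
BITRATE_BPS = 9  # the telegraph's bit rate from the example, bits per second

def transfer_time_seconds(size_bytes: int, bitrate_bps: float) -> float:
    """Time to send size_bytes at bitrate_bps, ignoring any overhead."""
    return size_bytes * 8 / bitrate_bps

one_mb = 1024 ** 2  # 1 MB in bytes
one_gb = 1024 ** 3  # 1 GB in bytes

hours_for_mb = transfer_time_seconds(one_mb, BITRATE_BPS) / 3600
years_for_gb = transfer_time_seconds(one_gb, BITRATE_BPS) / (365 * 24 * 3600)

print(f"1 MB at 9 b/s: about {hours_for_mb:.0f} hours")   # about 259 hours
print(f"1 GB at 9 b/s: about {years_for_gb:.0f} years")   # about 30 years
```

The same function works for any file size and bit rate, which makes it easy to check the later examples in this article as well.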
Of course, this all involves a human element, which significantly slows things down. Luckily, technology has evolved past this, and we are now looking at network speeds of over 10 Mb/s. This means our person A would have to tap his telegraph key 10 million times per second to match it.
It is also important to note that in the illustration above, the sender makes full use of the available bit rate, which isn't true in the real world. If we are downloading a 1 GB file from the Internet at a speed of 10 Mb/s, it will actually take us more than the roughly 13 minutes that simple math would suggest. This is because every data packet sent over a network contains more than just the payload bits we want. This extra data includes the source and destination addresses, error detection codes, and other information that ensures the highest possible quality of service for the user.
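We can sketch the effect of this overhead numerically. The 5% overhead figure below is purely an assumption for illustration; real overhead varies with the protocols in use and the packet sizes:

```python
LINK_SPEED_BPS = 10 * 10**6   # advertised link speed: 10 Mb/s
OVERHEAD_FRACTION = 0.05      # assumed share of each packet taken by headers etc. (illustrative)

file_size_bits = 8 * 10**9    # 1 GB (decimal) expressed in bits

# Naive estimate: the whole link carries payload.
ideal_seconds = file_size_bits / LINK_SPEED_BPS

# More realistic: only part of each packet is payload.
effective_rate = LINK_SPEED_BPS * (1 - OVERHEAD_FRACTION)
actual_seconds = file_size_bits / effective_rate

print(f"Ideal:         {ideal_seconds / 60:.1f} minutes")   # ~13.3 minutes
print(f"With overhead: {actual_seconds / 60:.1f} minutes")  # ~14.0 minutes
```

Even a modest per-packet overhead adds noticeably to the total download time, which is why real transfers rarely match the advertised link speed.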
Calculation and conversion of bit rate is relatively straightforward. Due to the high speeds we have today, the standard unit of bit rate measurement is megabits per second (Mb/s or Mbps), where one megabit equals one million bits. By extension, this means we can simply convert megabits to the larger megabytes (MB) by dividing them by 8. This allows us to express 10 Mb/s as 1.25 MB/s (note that the symbol for megabyte is "B", while the bit symbol is "b"). However, network speeds are still expressed in bits (and their larger multiples) per second for historical reasons, and because data is mostly sent over the network as a serial transfer, one bit after another.
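The conversion is a single division, but it is a common source of confusion, so here is a small sketch:

```python
def mbps_to_mbyte_per_s(mbps: float) -> float:
    """Convert megabits per second (Mb/s) to megabytes per second (MB/s).

    One byte is 8 bits, so we divide by 8.
    """
    return mbps / 8

print(mbps_to_mbyte_per_s(10))   # 1.25  -> 10 Mb/s is 1.25 MB/s
print(mbps_to_mbyte_per_s(100))  # 12.5  -> 100 Mb/s is 12.5 MB/s
```

Keeping the lowercase "b" (bit) and uppercase "B" (byte) straight in variable and function names helps avoid the classic factor-of-8 mistake.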
Bit rate can also be used to describe the quality of audio and video data, in a similar way as resolution affects the quality of an image: a 1920x1080 image would normally appear clearer and more detailed than a 1200x1080 one (on a screen of the same size, of course) due to a higher pixel density. In this case, the audio or video bit rate tells us how many bits of data are used to present the content to the user every second. Usually, the higher the bit rate of audio or video, the higher the quality, but this also depends on the way the files are compressed. If we have an uncompressed video file with a resolution of 1920x1080 and 24 frames per second, every second of the video would be about 142 MB, and this is without any sound.
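That figure follows directly from the frame dimensions, assuming 24-bit (3 bytes per pixel) colour, which is a common but here assumed colour depth:

```python
# Uncompressed size of one second of 1920x1080, 24 fps video.
width, height = 1920, 1080
fps = 24
bytes_per_pixel = 3  # 24-bit colour, an assumed (though typical) depth

bytes_per_frame = width * height * bytes_per_pixel
bytes_per_second = bytes_per_frame * fps

print(f"{bytes_per_second / 1024**2:.0f} MB per second")  # ~142 MB
```

Multiply by 8 and you get an uncompressed bit rate of over a gigabit per second, which is exactly why compression is unavoidable for video.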
This is where codecs come into play. In essence, a codec is a program that uses a specific algorithm to compress an audio or video file while maintaining high quality. Files compressed with a given codec can only be decompressed properly by a decoder that understands the same format, so in order to play some kinds of audio and video files, we need to have the matching codec installed (often distributed as part of a codec pack).
In this article we've presented and explained the bit rate: its use and significance, as well as ways to calculate it and convert it between different units of measurement. We've also shown how modern technology uses different tools and algorithms to "fool" the user by compressing content, reducing its actual bit rate while maintaining high fidelity and quality. This is done, of course, not to actually fool the user, but to make the content more easily accessible, as we are always limited by either the size of portable media, such as DVDs and Blu-rays, or by network speeds, the Internet included.