
Bits, Bytes, Hexadecimal

What is a bit

A bit is one binary digit, the smallest unit of information on a machine.

A single bit is either 0 or 1.

Distinct Values

The number of distinct values that can be represented by n bits is 2^n

0 bits does not really exist as a storage size, but formally 2^0 = 1, a single distinct value
1 bit | 2^1 = 2 | 2 distinct values (0, 1)
2 bit | 2^2 = 4 | 4 distinct values (00, 01, 10, 11)
4 bit | 2^4 = 16 | 16 distinct values
8 bit | 2^8 = 256 | 256 distinct values (1 byte)
16 bit | 2^16 = 65,536 | 65,536 distinct values (2 bytes)
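
A quick way to check these counts is to compute 2^n directly; a minimal Python sketch:

    # Number of distinct values representable by n bits is 2**n
    for n in (1, 2, 4, 8, 16):
        print(f"{n:>2} bits -> {2**n:,} distinct values")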

Max Value

The maximum value n bits can represent is 2^n - 1, because one of the 2^n distinct values is used to represent 0
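
The same rule in code (a small sketch; the power and the bit shift below are equivalent ways of building 2^n):

    # Maximum value of n bits: 2**n - 1, since one of the 2**n patterns is 0
    n = 8
    print(2**n - 1)      # 255
    print((1 << n) - 1)  # 255, same value built with a bit shift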

What is n-bit

The common terms 8-bit, 16-bit, 32-bit, and 64-bit all refer to a processor's word size

A word is the native size of information a processor can place into a register and process without special instructions

  • it also commonly (though not always) matches the size of the memory address space

The word size of a chip is one of the most defining aspects of its design

What is a word size

Word size refers to the amount of data a CPU's internal data registers can hold and process at one time

  • size of the internal functional units in the CPU itself

The difference in word size has a dramatic impact on the capabilities and performance of a given chip

  • Once you get up to 32-bits, the differences mainly become those of refinement (unless you are running a really big application, like genetic analysis or counting all the stars in the galaxy big)
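
One rough way to see the word size your environment targets is to check the pointer size; a Python sketch (this reports the interpreter build's pointer width, which normally matches the platform's word size):

    import struct

    # Size of a native C pointer ("P") in bytes, times 8 bits per byte
    word_size_bits = struct.calcsize("P") * 8
    print(word_size_bits)  # typically 64 on a 64-bit build, 32 on a 32-bit build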

What is Bit-Depth

Bit-depth is determined by the number of bits used to define each pixel

  • The greater the bit depth, the greater the number of tones (grayscale or color) that can be represented on the screen
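
The 2^n rule applies here too; a quick sketch of how many tones a few common bit depths allow:

    # Tones/colors representable at a given bit depth: 2**depth
    print(2**1)   # 2 (1-bit: black and white)
    print(2**8)   # 256 (8-bit grayscale)
    print(2**24)  # 16777216 (24-bit RGB color)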

Common Bit Sizes

4 bits = 1 nibble (one hexadecimal digit)

Can represent 16 (2^4) different values.

Decimal representation from 0 to 15.

  • 16-1 since "distinct values" includes 0.

Written in binary as 0000 (0) to 1111 (decimal value: 15)

Represents one hexadecimal digit
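
For instance, the nibble 1111 is 15 in decimal and F in hexadecimal (a quick check in Python):

    print(0b1111)       # 15
    print(hex(0b1111))  # '0xf' -- one hex digit per nibble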

8-bit (1-byte)

Can represent 256 (2^8) distinct values

Decimal representation from 0 to 255

  • 256-1 since "distinct values" includes 0

Written in binary as 0000 0000 (0) to 1111 1111 (decimal value: 255)

  • 128 (2^7) + 64 (2^6) + 32 (2^5) + 16 (2^4) + 8 (2^3) + 4 (2^2) + 2 (2^1) + 1 (2^0) = 255
  • those eight place values are the 8 bits, or binary digits
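
That place-value sum can be checked directly (a small Python sketch):

    # All eight bits set: powers of two from 2**7 down to 2**0
    print(sum(2**i for i in range(8)))  # 255
    print(int("11111111", 2))           # 255, the same value parsed from binary
    print(2**8 - 1)                     # 255 again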

16-bit (2-bytes)

Can represent 65,536 distinct values.

Can represent decimal value from 0 to 65,535

  • 2^16 - 1 = 65,536 - 1 = 65,535

32-bit (4-bytes)

Can represent decimal value from 0 to 4,294,967,295

  • 2^32 - 1 = 4,294,967,296 - 1 = 4,294,967,295
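
The same pattern for the 16-bit and 32-bit ranges (a quick sketch):

    # Maximum unsigned value for a given width: 2**bits - 1
    for bits in (16, 32):
        print(f"{bits}-bit max: {2**bits - 1:,}")
    # 16-bit max: 65,535
    # 32-bit max: 4,294,967,295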

64-bit (8 bytes)

Signed 64-bit

Max size: 19-digit number

Bit of highest significance is reserved for the sign

  • If this bit is 1, the number is negative

With a plain sign bit next to a magnitude (a sign-magnitude encoding), both -0 and +0 would exist in the representation

  • In ordinary arithmetic this doesn't matter, but some particular operations could behave differently. In practice, modern CPUs store signed integers in two's complement, which has a single zero and uses the spare bit pattern for one extra negative value.

Which means the decimal representation can go as low as -2^63 and as high as 2^63 - 1:

  • -9,223,372,036,854,775,808
    • (-2^63)
  • 9,223,372,036,854,775,807
    • (2^63 - 1; one pattern on the non-negative side is taken by 0)
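
A quick way to see these limits is to pack values as signed 64-bit integers with Python's struct module (a sketch):

    import struct

    # Signed 64-bit bounds in two's complement
    lo, hi = -2**63, 2**63 - 1
    print(lo, hi)

    # Packing the bounds as a signed 64-bit integer ("q") works...
    struct.pack("<q", lo)
    struct.pack("<q", hi)

    # ...but one past the top does not fit
    try:
        struct.pack("<q", hi + 1)
    except struct.error as exc:
        print("out of range:", exc)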

Unsigned 64-bit

Max size: 20-digit number

  • can go as low as 0 and as high as 2^64 - 1.
  • unsigned integers cannot represent negative values
    • technically it is possible to represent the magnitude of a quantity with a uint64 and its positive/negative sign in a separate bool, as sketched below

Number from 0 to 18,446,744,073,709,551,615
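
The unsigned bounds, plus the sign-in-a-separate-bool idea from the list above, sketched in Python (the variable names are just illustrative):

    import struct

    # Unsigned 64-bit range: 0 .. 2**64 - 1
    print(2**64 - 1)              # 18446744073709551615
    struct.pack("<Q", 2**64 - 1)  # fits in an unsigned 64-bit slot ("Q")

    # Illustrative only: keep the magnitude in a uint64-sized value
    # and the sign in a separate bool
    magnitude, is_negative = 12345, True
    value = -magnitude if is_negative else magnitude
    print(value)  # -12345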

Why subtract one (-1)?

Since the computer has to store the number 0 in an unsigned int, it actually starts counting at 0, then 1, and so on

  • That means that the n-th number for the computer is, in fact, n-1
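
For example, with 2 bits the four distinct values are 0 through 3, so the fourth value is 3, not 4:

    values = list(range(2**2))
    print(values)       # [0, 1, 2, 3] -- four distinct values
    print(max(values))  # 3, which is 2**2 - 1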

byte

Unit of digital information consisting of 8 bits.

Why byte instead of bits?

Hardware-level memory is naturally organized into addressable chunks

  • Small chunks mean you can have fine-grained things like 4-bit numbers
  • Large chunks allow for more efficient operation (typically a CPU moves things around in whole chunks or multiples of a chunk)

Large Bytes

kilobyte (KB), megabyte (MB), gigabyte (GB), and terabyte (TB) are powers of two, calculated as 2^(10 × n) bytes

Kilobyte = 2^10 = 1,024 bytes
Megabyte = 2^20 = 1,048,576 bytes
Gigabyte = 2^30 = 1,073,741,824 bytes
Terabyte = 2^40 = 1,099,511,627,776 bytes
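
The same values computed directly (a small sketch):

    # Binary-prefix sizes: 2**(10 * n) bytes
    for n, name in enumerate(["Kilobyte", "Megabyte", "Gigabyte", "Terabyte"], start=1):
        print(f"{name} = 2^{10 * n} = {2**(10 * n):,} bytes")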

hexadecimal

Hexadecimal representation is base 16 and uses 16 digit values:

0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F

Since 16 equals 2^4 (4 bits), one hexadecimal digit also represents 4 bits.

Since a byte is 8 bits, you can represent a byte with two hexadecimal digits (4 bits × 2, base 16 per digit):

00 01 3D 00 40 28 E6 66

Each hexadecimal digit ranges between:

  • 0 (binary 0000, decimal 0)
  • F (binary 1111, decimal 15)

This makes 7F (decimal 127) the last value of ASCII in hexadecimal.

FF (the maximum two-digit hexadecimal value) is 255 in decimal and 1111 1111 in binary.
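
These conversions are easy to check in Python (a quick sketch; the byte sequence is the example shown above):

    n = 255
    print(hex(n))         # '0xff'
    print(bin(n))         # '0b11111111'
    print(int("ff", 16))  # 255
    print(int("7f", 16))  # 127, the last ASCII value

    # Two hex digits per byte, as in the example sequence above
    print(bytes([0x00, 0x01, 0x3D, 0x00, 0x40, 0x28, 0xE6, 0x66]).hex())  # '00013d004028e666'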

Why does it exist

Hexadecimal is more "human readable" than binary.
