bit
A bit is one binary digit, the smallest unit of information on a machine. A bit is either 0 or 1.
The number of distinct values that can be represented by n bits is 2^n.
0 bit | 2^0 = 1 | 1 distinct value (0 bits cannot store anything, but the formula still yields 1)
1 bit | 2^1 = 2 | 2 distinct values (0, 1)
2 bit | 2^2 = 4 | 4 distinct values (00, 01, 10, 11)
4 bit | 2^4 = 16 | 16 distinct values
8 bit | 2^8 = 256 | 256 distinct values (1 byte)
16 bit | 2^16 = 65,536 | 65,536 distinct values (2 bytes)
The max value n bits can represent is 2^n - 1, because one of the 2^n distinct values is reserved for representing 0.
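The pattern in the table above can be checked with a few lines of Python (a quick sketch, nothing library-specific):

```python
# 2^n distinct values, but a max value of 2^n - 1, since one pattern is taken by 0.
for n in (1, 2, 4, 8, 16):
    distinct = 2 ** n
    max_value = 2 ** n - 1
    print(f"{n:>2}-bit: {distinct:,} distinct values, max value {max_value:,}")
```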
n-bit
The common terms 8-bit, 16-bit, 32-bit, and 64-bit all refer to a processor's word size.
word
A word is the native size of information a processor can place into a register and process without special instructions. The word size also typically determines the size of the memory address space.
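A sketch of how you might observe the word size of your own machine from Python's standard library (the `"P"` format code is the size of a native pointer, which matches the word size on common platforms):

```python
import struct
import sys

# A native pointer occupies one machine word on common platforms.
word_bits = struct.calcsize("P") * 8
print(f"word size: {word_bits}-bit")

# sys.maxsize is the largest signed value that fits in a word: 2^(word_bits - 1) - 1.
print(sys.maxsize == 2 ** (word_bits - 1) - 1)  # True
```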
The word size of any chip is the most defining aspect of its design.
word size
Word size refers to the amount of data a CPU's internal data registers can hold and process at one time. The difference in word size has a dramatic impact on the capabilities and performance of a given chip.
Bit-Depth
Bit-depth is determined by the number of bits used to define each pixel, which limits how many distinct values (grayscale or color) can be represented on the screen.
bit sizes
4-bit (1 nibble, one hexadecimal digit)
Can represent 16 (2^4) different values.
Decimal representation from 0 to 15.
16 - 1 = 15, since "distinct values" includes 0.
Written in binary as 0000 (0) to 1111 (15).
Represents one hexadecimal character.
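The nibble-to-hex-digit mapping can be listed with a short Python sketch:

```python
# Every 4-bit pattern corresponds to exactly one hexadecimal digit.
for n in range(16):
    print(f"{n:2d}  binary {n:04b}  hex {n:X}")
```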
8-bit (1 byte)
Can represent 256 (2^8) distinct values.
Decimal representation from 0 to 255.
256 - 1 = 255, since "distinct values" includes 0.
Written in binary as 0000 0000 (0) to 1111 1111 (decimal value: 255).
128 (2^7) + 64 (2^6) + 32 (2^5) + 16 (2^4) + 8 (2^3) + 4 (2^2) + 2 (2^1) + 1 (2^0) = 255
Each of the 8 positions is one bit, or binary digit.
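The place values above are easy to verify in Python:

```python
# Place values of the 8 bit positions in a byte, from 2^7 down to 2^0.
place_values = [2 ** i for i in range(7, -1, -1)]
print(place_values)       # [128, 64, 32, 16, 8, 4, 2, 1]
print(sum(place_values))  # 255, the value of 1111 1111
```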
16-bit (2 bytes)
Can represent 65,536 (2^16) distinct values.
Can represent decimal values from 0 to 65,535.
2^16 - 1 = 65,536 - 1 = 65,535
32-bit (4 bytes)
Can represent decimal values from 0 to 4,294,967,295.
2^32 - 1 = 4,294,967,296 - 1 = 4,294,967,295
64-bit (8 bytes)
Signed 64-bit
Max size: 19-digit number
The bit of highest significance is reserved for the sign; if it is 1, the number is negative.
In a naive sign-magnitude scheme, the sign bit would make both -0 and +0 possible. Two's complement (what computers actually use) avoids the duplicate zero by giving that bit pattern to one extra negative number, so the range is asymmetric.
The decimal representation goes as low as -2^63 and as high as 2^63 - 1:
-9,223,372,036,854,775,808 (lowest, -2^63)
9,223,372,036,854,775,807 (highest, 2^63 - 1; one value on the positive side is taken by 0)
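Python integers are arbitrary precision, so to see two's-complement wraparound you have to mask to 64 bits yourself; `to_signed64` below is a hypothetical helper written for this note, not a library function:

```python
def to_signed64(x: int) -> int:
    """Interpret the low 64 bits of x as a two's-complement signed value."""
    x &= (1 << 64) - 1   # keep only the low 64 bits
    if x >= 1 << 63:     # sign bit set -> negative number
        x -= 1 << 64
    return x

print(to_signed64(2 ** 63 - 1))  # 9223372036854775807  (max signed 64-bit)
print(to_signed64(2 ** 63))      # -9223372036854775808 (wraps around to the minimum)
```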
Unsigned 64-bit
Max size: 20-digit number
Goes as low as 0 and as high as 2^64 - 1.
(If you need a sign, you could keep the magnitude in a uint64 and represent its positive/negative quality in a separate bool.)
Numbers from 0 to 18,446,744,073,709,551,615.
The -1: since the computer has to store the number 0 in an unsigned int, it starts counting with 0, then 1, and so on. The n-th number for the computer is, in fact, n-1.
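A sketch of the unsigned range and the separate-bool idea mentioned above (the variable names are made up for illustration):

```python
# Unsigned 64-bit range: 0 .. 2^64 - 1, a 20-digit number.
UMAX64 = 2 ** 64 - 1
print(UMAX64)            # 18446744073709551615
print(len(str(UMAX64)))  # 20

# Storing the sign separately from an unsigned magnitude:
magnitude, is_negative = 42, True
signed_value = -magnitude if is_negative else magnitude
print(signed_value)      # -42
```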
byte
A unit of digital information consisting of 8 bits.
Why a byte instead of bits? Hardware-level memory is naturally organized into addressable chunks.
Kilobyte (KB), megabyte (MB), gigabyte (GB), and terabyte (TB) are calculated as two to the power of 10 * n:
Kilobyte = 2^10 = 1,024 bytes
Megabyte = 2^20 = 1,048,576 bytes
Gigabyte = 2^30 = 1,073,741,824 bytes
Terabyte = 2^40 = 1,099,511,627,776 bytes
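The unit table follows directly from 2^(10 * n):

```python
# Each unit is 2^(10 * n) bytes: n = 1 kilobyte, 2 megabyte, 3 gigabyte, 4 terabyte.
for n, name in enumerate(["kilobyte", "megabyte", "gigabyte", "terabyte"], start=1):
    print(f"{name} = 2^{10 * n} = {2 ** (10 * n):,} bytes")
```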
hexadecimal representation
Hexadecimal representation consists of 16 base values:
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F
Since 16 equals 2^4, one hexadecimal digit represents 4 bits.
Since a byte is 8 bits, you can represent a byte with two hexadecimal digits (4 bits x 2):
00 01 3D 00 40 28 E6 66
Each hexadecimal digit ranges between:
0 (0000 binary, 0 decimal) and F (1111 binary, 15 decimal).
Hexadecimal is more "human readable" than binary numbers.
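The byte string from above round-trips through Python's built-in formatting (`bytes.hex` with a separator needs Python 3.8+):

```python
# One byte (8 bits) is exactly two hexadecimal digits (4 bits each).
data = bytes([0x00, 0x01, 0x3D, 0x00, 0x40, 0x28, 0xE6, 0x66])
print(data.hex(" ").upper())  # 00 01 3D 00 40 28 E6 66

# The same value in binary, hexadecimal, and decimal:
n = 0xE6
print(format(n, "08b"))  # 11100110
print(format(n, "02X"))  # E6
print(n)                 # 230
```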