Floating-point representations are not true representations of real numbers.
```js
// 0.1 + 0.2 != 0.3
console.log(0.1 + 0.2); // 0.30000000000000004
```
When you write `0.3` in JS, it is stored as `0.299999999999999988897769753748434595763683319091796875`, which is rounded/trimmed to `0.3` when printed.
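You can inspect the stored value yourself by asking for more decimal digits. One way (a quick sketch; 54 digits happens to cover the full binary expansion of `0.3`, and `toFixed` accepts up to 100 fraction digits) is:

```js
// Print enough decimal digits to expose the exact stored double.
console.log((0.3).toFixed(54));
// 0.299999999999999988897769753748434595763683319091796875
```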
When you add `0.1` and `0.2`, here is what happens:
```js
0.1       // 0.1000000000000000055511151231257827021181583404541015625
0.2       // 0.200000000000000011102230246251565404236316680908203125
0.1 + 0.2 // 0.3000000000000000444089209850062616169452667236328125
```
The "error" caused by adding the two floating point representation of 0.1
and 0.2
pushed the result above the error threshold of 0.3
.
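The usual workaround is to compare floats against a tolerance instead of with `===`. A minimal sketch (the helper name `nearlyEqual` is just illustrative):

```js
// Treat two doubles as equal if they differ by less than a tolerance.
// Number.EPSILON is the gap between 1 and the next representable double.
function nearlyEqual(a, b, epsilon = Number.EPSILON) {
  return Math.abs(a - b) < epsilon;
}

console.log(0.1 + 0.2 === 0.3);           // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
```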
Think of it like a currency conversion: just as converting money to another currency and back rounds to the nearest cent at each step, converting a decimal number to binary and back rounds to the nearest representable value, so small discrepancies creep in.
With integer inputs, checking for a `null` or zero value is faster this way:
```js
// commonly used approach
if (x != 0) {
  // ...
}

// faster approach using implicit boolean coercion
if (x) {
  // 0 is falsy
  // any non-zero number is truthy
}
```
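One caveat worth keeping in mind: `if (x)` skips not only `0` but also `null`, `undefined`, and `NaN`, whereas `if (x != 0)` enters the branch for all three. That is usually what you want with integer inputs, but it is a behavior change. A quick demonstration:

```js
// Both tests agree on numbers, but differ on null/undefined/NaN.
for (const x of [0, 1, -1, null, undefined, NaN]) {
  console.log(x, "->", x != 0, "vs", Boolean(x));
}
// 0         -> false vs false
// 1         -> true  vs true
// -1        -> true  vs true
// null      -> true  vs false
// undefined -> true  vs false
// NaN       -> true  vs false
```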
Coercing non-zero falsy values (such as `null`, `undefined`, or `NaN`) to zero with OR. This works because the `||` operator returns the value of the second operand when the first operand is falsy.
```js
const integerExpectation = nonZeroFalsyValue || 0;
```
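For example (the variable names here are just illustrative):

```js
// Normalize possibly-missing inputs to integers.
const values = [null, undefined, NaN, 0, 7];
console.log(values.map((v) => v || 0)); // [0, 0, 0, 0, 7]
```

Note that `||` replaces every falsy value, including a legitimate `0` (harmless here, since `0` maps to `0`). If you only want to replace `null` and `undefined`, the nullish coalescing operator `??` is the narrower tool.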