I've noticed some curious behavior with division in Ruby.
some_nbr / 0   # ZeroDivisionError with 0 (integer)
some_nbr / 0.0 # Infinity (or NaN when some_nbr is 0.0) with 0.0 (float)
Of course, division by zero is bad, but I'd like to figure out why dividing by zero with an integer raises an exception, whereas doing the same with a float just returns a special value.
Best How To :
Integer 0 is exactly zero; there is no representation error. And since division by zero is mathematically undefined, it makes sense that integer division by 0 raises an error.
On the other hand, the float 0.0 does not necessarily represent exactly zero. It might originate from a number whose absolute value was small enough to be rounded to zero. In that case, the mathematical division is still defined, so it would not make sense to suddenly raise an error just because the divisor's absolute value became small. However, the meaningful quotient cannot be recovered because of the rounding, so the best that can be done is to return some sort of pseudo-numeral, such as Infinity or NaN.
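A short sketch of both behaviors (the variable names here are just for illustration):

```ruby
# Integer division by an exact zero raises an exception.
begin
  1 / 0
rescue ZeroDivisionError => e
  puts e.class # => ZeroDivisionError
end

# Float division by zero follows IEEE 754 and returns special values instead.
puts 1 / 0.0          # => Infinity
puts -1 / 0.0         # => -Infinity
puts (0.0 / 0.0).nan? # => true (0.0 / 0.0 is the indeterminate case)
```

Note that the float result depends on the dividend: a nonzero dividend gives signed Infinity, while 0.0 / 0.0 gives NaN, since even the sign of the "true" quotient is unknowable there.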