I'm reading a book about CUDA.
In the chapter that explains CUDA's floating-point arithmetic, I found something odd.
The book says that (1.00 * 1) + (1.00 * 1) + (1.00 * 0.01) + (1.00 * 0.01) = 10, where all of the numbers are written in binary; 0.01 in binary is 0.25 in decimal, and the result 10 in binary is 2 in decimal.
So, in decimal, serially adding 1 + 1 + 0.25 + 0.25 gives 2 instead of 2.5.
The book explains why this happens: after computing 1 + 1 = 2, the + 0.25 is effectively ignored because 0.25 is too small compared to the other operand (the 2 produced by 1 + 1).
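My guess at what is going on, assuming the book's toy format keeps only two fraction bits after the leading 1: 2 is 1.00 * 2^1 and 0.25 is 1.00 * 2^-2, so to add them, 0.25 has to be rewritten with the larger exponent as 0.001 * 2^1. The exact sum 1.001 * 2^1 would need three fraction bits, so with only two it is rounded back to 1.00 * 2^1, which is just 2 again.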
After that, the book says that adding in the order 0.25 + 0.25 + 1 + 1 produces 2.5, because the intermediate 0.5 is considered large enough to be added to 1 without being lost.
What does this mean exactly? How does the processor decide that 0.25 is too small compared to 2? Is there a well-defined standard that governs this?
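I can't reproduce the book's toy low-precision format directly, but here is a minimal sketch in plain C of what I believe is the analogous effect with an ordinary IEEE 754 single-precision float, assuming the default round-to-nearest mode. The values 1.0f and 1e-7f are my own choices, not the book's: 1e-7f is smaller than half the gap between adjacent floats around 2.0f, so it vanishes when it is added last, but not when the small terms are combined first.

```c
#include <stdio.h>

int main(void)
{
    /* "Big" and "small" terms, analogous to the book's 1 and 0.25:
       1e-7f is below half the spacing between adjacent floats near 2.0f,
       so adding it to 2.0f has no effect under round-to-nearest. */
    float big   = 1.0f;
    float small = 1e-7f;

    /* Big terms first: the running sum reaches 2.0f, and both small
       terms are then rounded away. */
    float sum_big_first = ((big + big) + small) + small;

    /* Small terms first: they combine to about 2e-7f, which is large
       enough to survive being added to 1.0f and then to 2.0f. */
    float sum_small_first = ((small + small) + big) + big;

    printf("big terms first  : %.8f\n", sum_big_first);    /* prints 2.00000000  */
    printf("small terms first: %.8f\n", sum_small_first);  /* prints ~2.00000024 */
    return 0;
}
```

If I'm reading the book correctly, this is the same ordering effect, just with float's 23 fraction bits instead of the toy format's two.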