
Originally posted by Starman:
1,761.69 - 1,757.20 = 4.4899999999998
This is actually a very common problem with floating point numbers represented in computers. Before I go into the dry drivel, the basic calculator has a similar problem, one just cannot see it. Try the following in both the basic and advanced calculators:
1761.69 - 1757.20 =
* 100 =
- 449 =
In the advanced calculator (set to Sci8), I get -2.18278728e-11. In the basic calculator I get -9.0949470e-13. The answer done by hand is 0. Why the difference between the two calculators? I'm not sure; I assume it's a matter of representation differences. Why the difference between the calculators and 0? See drivel below.
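The same experiment can be reproduced in any language that computes with IEEE doubles. Below is a sketch in Python (whose float type is an IEEE double). The exact residue depends on how a given implementation parses and rounds its input, so the digits need not match either calculator's display:

```python
# Python floats are IEEE 754 doubles, like the advanced calculator.
a = 1761.69
b = 1757.20

diff = a - b          # mathematically 4.49, but 4.49 is not representable exactly
step2 = diff * 100    # mathematically 449
step3 = step2 - 449   # mathematically 0, but a tiny residue remains

print(diff)   # close to, but not exactly, 4.49
print(step3)  # a tiny nonzero number instead of 0
```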
The Drivel:
One has to remember that most modern computers use binary numbers internally, not decimal. For example:
10 = 2
110 = 6
1010 = 10
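These expansions can be checked with Python's built-in bin (nothing calculator-specific here):

```python
# Integer <-> binary conversions using only built-ins.
print(bin(2))    # 0b10
print(bin(6))    # 0b110
print(bin(10))   # 0b1010

# And back again:
print(int("1010", 2))  # 10
```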
There are many ways to represent floating point numbers on computers. The most common is referred to as IEEE floats or doubles, after the ANSI/IEEE 754 standard. Floats are 32 bits long, doubles are 64. I know that the advanced calculator uses IEEE doubles internally; I do not know about the basic.
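The two sizes are easy to confirm by packing a value into raw bytes; this sketch uses Python's standard struct module:

```python
import struct

# 'f' packs an IEEE 754 single (float), 'd' packs a double.
print(len(struct.pack(">f", 1.0)))  # 4 bytes = 32 bits
print(len(struct.pack(">d", 1.0)))  # 8 bytes = 64 bits
```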
Ignoring the IEEE standard for a moment, how does one represent fractions in binary?
0.1 = 1/2 or 0.5
0.01 = 1/4 or 0.25
0.001 = 1/8 or 0.125
But then what is a simple decimal fraction like 0.1?
0.0001100110011001100110011001100110011...
It is a repeating pattern. But that means no matter how many binary digits one uses, one cannot represent the number exactly. Now let us look at the numbers given in binary:
1761.69: 11011100001.1011000010100011110101110000101000111101011100001010
1757.20: 11011011101.0011001100110011001100110011001100110011001100110011
Subtracting, I get:
100.0111110101110000101000111101011100001010001111010110
Converting that back to decimal and rounding to 16 significant digits, I get 4.490000000000000.
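One can confirm that the infinitely precise subtraction really is 4.49 by doing the arithmetic with exact rationals instead of doubles; Python's fractions module works for this sketch:

```python
from fractions import Fraction

# Exact rational arithmetic: no binary rounding at any step.
exact = Fraction("1761.69") - Fraction("1757.20")
print(exact)                       # 449/100
print(exact == Fraction("4.49"))   # True

# Contrast with doubles, where each literal is already rounded
# before the subtraction even happens:
print(1761.69 - 1757.20 == 4.49)   # False
```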
Checking on the standard, an IEEE double claims to store 53 significant binary digits. Reducing the two numbers to only 53 binary digits each and then subtracting, I get:
100.01111101011100001010001111010111000010100
Converting this to decimal and rounding to 16 significant digits, I get: 4.489999999999782. If one then rounded it to only 14 significant digits for display, one gets the answer: 4.4899999999998.
Now, IEEE doubles claim to provide about 16 decimal digits of significance. In the subtraction, one loses 4 of them (dropping from the thousands place to the ones place), thus only about 12 digits should be relied on. 4.4899999999998 agrees with 4.49 to 12 digits.
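The loss of significance can be quantified with math.ulp (available in Python 3.9+), which gives the spacing between adjacent doubles at a given magnitude; a sketch:

```python
import math

# Spacing between adjacent doubles near the inputs vs. near the result.
print(math.ulp(1761.69))  # 2**-42, about 2.3e-13
print(math.ulp(4.49))     # 2**-50, about 8.9e-16

# Each input carries an uncertainty of up to about half an ulp (~1e-13),
# and that absolute error survives the subtraction unchanged -- but it is
# now measured against 4.49 instead of ~1761, so the relative error grows
# by a factor of roughly 400 (thousands place down to the ones place),
# costing several decimal digits of significance.
```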
Conclusions:
Now, is this a calculation bug in the advanced calculator? Not really; it did "perform as designed". It might be argued to be a display bug. The calculator probably should not display more than, say, 10 significant digits; the advanced calculator's author apparently chose 14, the basic's 8.
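Display precision really is just a formatting decision over the same stored bits. In Python, the same double prints as a clean 4.49 at 10 significant digits and exposes the residue at more; this is a sketch of what the two calculators are likely doing:

```python
diff = 1761.69 - 1757.20   # an IEEE double, not exactly 4.49

# Same stored value, different display precisions:
print(f"{diff:.10g}")  # 10 significant digits: the error is hidden
print(f"{diff:.17g}")  # 17 significant digits: the residue shows
```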
[This message has been edited by potter (edited 04-14-2000).]


