Floating-point analysis
$\def\fl{\mathop{\text{fl}}}$
For roundoff error in double precision:
Let
$\fl(u)$ represent the correctly rounded floating-point number closest to the real number
$u$.
Let
$x,y,z$ be floating-point numbers between
$0$ and
$1$ (so that
$x = \fl(x)$ etc.).
Let
$x+y+z$ be the true sum and let
$\fl(x+y+z)$ be the computed sum.
If
$\fl(x+y+z)=1$ (compared with or without tolerance), then
$$x+y+z=1+\delta$$
where
$\delta$ is a small roundoff error bounded by
$$-\tau_{-} \le \delta \le \tau_{+}$$
with tolerances
$\tau_{-} \ge \varepsilon/4$ and
$\tau_{+} \ge \varepsilon/2$, where
$\varepsilon = 2^{-52}$ is the machine epsilon.
Without a comparison tolerance these hold with equality, half an ulp of $1$ on each side: the spacing of machine numbers is $\varepsilon/2$ just below $1$ and $\varepsilon$ just above, so $\tau_{-} = \varepsilon/4$ and $\tau_{+} = \varepsilon/2$.
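A quick numerical check (a minimal sketch, assuming the default round-to-nearest, ties-to-even arithmetic): both endpoints round to exactly $1$, while $1+\varepsilon$ does not:
eps = 2^-52; (* machine epsilon *)
N[1 + {-eps/4, eps/2, eps}] - 1
(* {0., 0., 2.22045*10^-16} *)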
Now
$2u$ is computed without roundoff error,
as are
$2u-1$ and
$1-2u$, if
$0.5 \le u < 1.0$.
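A spot check (a sketch; any machine number in $[0.5, 1)$ would do, here the first initial value from the example below) comparing machine arithmetic against exact rational arithmetic:
u = 0.84474; (* a machine number in [0.5, 1) *)
exact = SetPrecision[u, Infinity]; (* the exact rational value of u *)
{2 u - 1 - N[2 exact - 1], 1 - 2 u - N[1 - 2 exact]}
(* {0., 0.} *)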
If
$(x',y',z')$ is the iterate following
$(x,y,z)$, it is not hard to see that
$x'+y'+z'$ will be either
$1+2\delta$ or
$1-2\delta$ (under the 3rd or 4th rule, respectively).
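To spell out the arithmetic (judging from the iterates shown below): when a step doubles every component and subtracts 1 from exactly one of them,
$$x'+y'+z' = 2(x+y+z)-1 = 1+2\delta,$$
and when a step replaces each component $u$ by $1-2u$,
$$x'+y'+z' = 3-2(x+y+z) = 1-2\delta.$$
Either way the magnitude of the deviation doubles.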
If
$\delta \ne 0$, then the iteration eventually stops with an "Error Total", unless a "Half Total" happens to occur earlier.
If
$\delta=0$, then the following determines the behavior.
If
$0.5 \le u < 1$ and the trailing
$n$ bits of the mantissa of
$u$ are zero,
then the trailing
$n+1$ bits of the mantissas of
$2u-1$ and
$1-2u$ are zero.
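For instance (a sketch with a hypothetical $u$; the helper counts zero bits at the low end of the 53-bit mantissa via the exact rational value, valid for $|u|$ in $[0.5, 1)$):
trailingZeroBits[u_Real] := IntegerExponent[2^53 SetPrecision[Abs[u], Infinity], 2];
u = 0.75 + 2^-53; (* hypothetical u in [0.75, 1) whose mantissa ends in a 1 bit *)
trailingZeroBits /@ {u, 2 u - 1, 1 - 2 u}
(* {0, 1, 1} *)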
After at most 52 such transformations,
$u = 0.5$ and you get a "Half Total".
Since each of
$x$,
$y$, and
$z$ would eventually undergo 52 such transformations
if the iteration never stopped, a "Half Total" is inevitable.
Example: Case
$\delta \ne 0$
seq = numsNest[{0.84474, 0.117199} // Append[#, 1 - Total[#]] &, 1.]
(*
{{0.84474, 0.117199, 0.038061}, {0.68948, 0.234398,
0.076122}, {0.37896, 0.468796, 0.152244}, {0.24208, 0.062408,
0.695512}, {0.48416, 0.124816, 0.391024}, {0.03168, 0.750368,
0.217952}, {0.06336, 0.500736, 0.435904}, {0.12672, 0.001472,
0.871808}, {0.25344, 0.002944,
0.743616}, {{0.25344, 0.002944, 0.743616}, "Error Total"}}
*)
The error in the total doubles at each step:
Total /@ Most[seq] - 1
Ratios[%]
(*
{0., -1.11022*10^-16, -2.22045*10^-16, 4.44089*10^-16,
8.88178*10^-16, -1.77636*10^-15, -3.55271*10^-15, -7.10543*10^-15,
-1.42109*10^-14}
{ComplexInfinity, 2.`, -2.`, 2.`, -2.`, 2.`, 2.`, 2.`}
*)
The initial zero is because
$\fl(x+y+z)=1$ exactly, but
$(x+y+z)-1 = -2^{-54}$:
SetPrecision[{0.84474, 0.117199} // Append[#, 1 - Total[#]] &, Infinity] //
Total // N[# - 1] &
(* -5.55112*10^-17 *)
The error after the first step is twice this, namely
$-2^{-53} \approx -1.11\times10^{-16}$.
Example: Case
$\delta = 0$.
Remove the roundoff by adjusting an initial value that is less than 0.5 (so that a $2^{-54}$ correction is exactly representable), which makes the exact total equal to 1:
SetPrecision[{0.84474, 0.117199 + 2^-54} //
Append[#, 1 - Total[#]] &, Infinity] //
Total // N[# - 1] &
(* 0. *)
seq2 = numsNest[{0.84474, 0.117199 + 2^-54} // Append[#, 1 - Total[#]] &, 1.]
(*
{{0.84474, 0.117199, 0.038061}, {0.68948, 0.234398, 0.076122},
...
{0.0136719, 0.244141, 0.742188}, {0.0273438, 0.488281, 0.484375},
{0.945313, 0.0234375, 0.03125}, {0.890625, 0.046875, 0.0625},
{0.78125, 0.09375, 0.125}, {0.5625, 0.1875, 0.25}, {0.125, 0.375, 0.5},
{{0.125, 0.375, 0.5}, "Half Total"}}
*)
A visualization of the effective loss of precision:
MatrixPlot@RealDigits[#, 2][[All, 1]] & /@
Transpose@seq2[[;; -2]] // GraphicsRow
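A rough numeric companion (a sketch): the base-2 logarithm of the denominator of each component's exact value gives the position of its lowest set bit; by the final iterate the components occupy only the first few bit positions:
Map[Log2@Denominator@SetPrecision[#, Infinity] &, seq2[[;; -2]], {2}] // Last
(* {3, 3, 1} *)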
