It is not clear to me what the statement is, or whether there is a question here. Certainly if the precision is 8, artificially raising it using SetPrecision
will not give "better" results, other than perhaps to work around any limitations the original low precision might impose (and I am not sure there are any in this case).
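As a small illustration of why raising precision adds no information (the input value here is made up):

```wolfram
(* A number entered to ~8 digits: the data carry no information
   beyond those digits. *)
x = 1.2345678;

(* SetPrecision pads the number out using the binary digits already
   stored internally; it does not recover any lost information. *)
SetPrecision[x, 20]
(* The trailing digits are artifacts of the binary representation,
   not improved knowledge of x. *)
```

Any computation done with the padded value is still, in effect, a computation on a value known only to the original 8 digits.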
I can say a bit about the arbitrary-precision documentation and numeric linear algebra in the Wolfram Language. Much of the latter is implemented using fixed-precision arithmetic, with condition number estimates or other means used to lower the precision/accuracy reported in the results. The error propagation of significance arithmetic is simply too pessimistic (its estimated errors grow too large) for effective use in the context of numeric linear algebra.
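A sketch of that behavior, using a standard ill-conditioned example (the Hilbert matrix; the specific sizes and precisions are illustrative):

```wolfram
(* Solve an ill-conditioned system starting from 20-digit input. *)
mat = N[HilbertMatrix[10], 20];
rhs = N[Range[10], 20];
sol = LinearSolve[mat, rhs];

(* The computation is done in fixed precision internally; a condition
   number estimate is then used to lower the precision attached to the
   result, so this will report well under the 20 digits of the input. *)
Precision[sol]
```

The point is that the lowered precision of the result reflects the conditioning of the problem, rather than being tracked term by term through significance arithmetic.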
Getting back to the original scenario, as best I understand it: there is a dimension parameter, m, and a parametrized family of matrices mat[m]. Precision of the entries is on the order of 8 digits in all cases, and the matrices become more ill-conditioned as m goes up. If that is a correct understanding, then the remark about eventual incorrectness is on target. In effect, each matrix is actually a Cartesian product of intervals (each element represents an interval around a value, of relative width on the order of 10^-8). Any linear algebra computation will give an approximation to a result that applies at best to some members of the original interval product. If the conditioning is bad then this approximation might be quite bad, and indeed error might swamp the approximation.
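One way to see this interval effect (a hedged sketch; the matrix, size, and perturbation scale are stand-ins for the mat[m] family in the question):

```wolfram
(* Entries known to ~8 digits: any perturbation at the 10^-8 relative
   level is consistent with the data. *)
mat = N[HilbertMatrix[8]];
rhs = ConstantArray[1., 8];
pert = mat*(1 + 10.^-8 RandomReal[{-1, 1}, {8, 8}]);

sol1 = LinearSolve[mat, rhs];
sol2 = LinearSolve[pert, rhs];

(* Relative change in the solution: it can be as large as roughly
   10^-8 times the condition number, which for this matrix is
   large enough that the change need not be small at all. *)
Norm[sol1 - sol2]/Norm[sol1]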