SetPrecision takes all numbers and mathematical constants it sees and assigns them their numerical equivalents at the requested precision. The expressions containing them are then evaluated, and error is propagated (assuming one did not use MachinePrecision as the second argument). N, by contrast, attempts to evaluate the entire expression to the requested precision (same caveat). Here is an example.
ee = Sin[10^5];
prec = 4;
{InputForm@N[ee, prec], InputForm@SetPrecision[ee, prec]}
(* Out[829]= {InputForm[0.0357487979720165093`4.], InputForm[0``0.0]} *)
Notice that all precision was lost in the second case, where SetPrecision was used.
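To see where the precision goes, here is a small sketch (reusing prec from above; the outputs in the comments are what I expect and may vary slightly by version). SetPrecision first coerces the inner 10^5 to a 4-digit bignum, carrying an absolute uncertainty of order 10, and only then does Sin evaluate, so significance arithmetic leaves nothing behind.
x = SetPrecision[10^5, prec];
{Precision[x], Accuracy[x]}
(* {4., -1.}, i.e. an absolute error of order 10 *)
InputForm[Sin[x]]
(* InputForm[0``0.0], matching the output above *)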
There are situations where the attained precision on application of N to a numeric expression might not be correct. The cause I am aware of lies in the evaluation of special functions, in cases where the internal error estimates might be off. I believe this is a rare situation, though.
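As a rough sanity check (my own recipe, not something the system guarantees), one can query the precision N claims to have attained and compare against a recomputation at noticeably higher working precision; the extra 20 digits below are an arbitrary choice.
expr = Sin[10^5];
approx = N[expr, 4];
Precision[approx]
(* 4., the claimed attained precision *)
N[expr, 24] - approx
(* should be negligible at the four-digit level if the claim holds *)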
Using N in a symbolic computation is a different kettle of fish. For example, code in definite integration needs to account for the presence of singular points that might only be approximately known. Code in functions such as Together (used by Integrate) might use Rationalize or take other steps just to avoid crashing; such code was not designed for handling approximate numbers (and indeed, the literature on this is far from complete).
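As an illustration of the kind of preprocessing meant here (not a claim about what Integrate actually does internally), Rationalize replaces an approximate number with a nearby exact rational, so that code written for exact input can proceed:
Rationalize[0.1]
(* 1/10 *)
Rationalize[0.1, 0]
(* the exact rational equal to the binary machine number 0.1 *)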
With all this in mind, your approach seems reasonable, but I cannot offer any guarantee. Moreover, I am fairly sure it would require careful numerical analysis of the inputs and results to provide such a guarantee.