You should probably check some of the standard references on numerical analysis. Also, I've seen nothing so far to indicate that the NSolve results are inconsistent with what one would expect (by "one", I mean someone with at least a passing familiarity with numerical approximation and truncation/roundoff error).
For a very quick example of what I refer to, let's find a root of a simple quadratic polynomial.
quadpoly = x^2 - 7;
rt = FindRoot[quadpoly == 0, {x, 3}]
(* Out[209]= {x -> 2.64575131106} *)
Now let's check whether the residual is zero, that is, whether plugging the root back into the polynomial actually makes it vanish.
In[210]:= quadpoly /. rt
(* Out[210]= 8.881784197*10^-16 *)
That's not an exact zero. Does that mean the root is somehow wrong?
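To put that residual in context, here is a quick sketch (continuing the session above; the exact figures may differ slightly on another machine) comparing it against $MachineEpsilon, the relative spacing of machine doubles.

$MachineEpsilon
(* 2.22045*10^-16 *)

(quadpoly /. rt)/$MachineEpsilon
(* about 4., i.e. the residual sits a few machine epsilons away from zero -- as close to an exact zero as machine arithmetic can deliver *)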
There is another serious issue with your example: scaling. You have machine numbers mixed with numbers whose exponents are too large in magnitude to be represented as machine numbers, and the variation in scale is far larger than the ULP of a machine number (unit in the last place; look it up if the term is unfamiliar). This is a situation where using either exact or high-precision input is fairly important. Without that, you will basically be dealing with massive truncation error, to the point where you won't be able to tell whether the results obtained are valid at all.
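To illustrate (the system from the question is not reproduced here, so this sketch uses a made-up polynomial whose leading coefficient lies outside machine-number range), one can keep the input exact and request a higher working precision:

poly = 10^400 x^2 - 3 x - 7;  (* 10^400 exceeds $MaxMachineNumber, so keep it exact *)
sols = NSolve[poly == 0, x, WorkingPrecision -> 50];
poly /. sols
(* with exact input and 50-digit working precision, the residuals come back tiny instead of being swamped by truncation error *)

The point is simply that the scale information reaches the solver intact, rather than being truncated to machine precision up front.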
It would be a good idea to consider carefully what it is you are trying to solve, and what will be required in order to do so. Then consider what the desirable properties of a solution would be (small absolute error, small relative error, some feature of the residuals, ...). It makes little sense to pose a problem that is not carefully set up, obtain solutions that give good (as in small) residuals, and then declare them no good; you have to begin with a setup from which sensible results can be derived. And you have to understand what "sensible" might mean in the context of numerical approximation.
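As a concrete illustration of such properties, here is a sketch continuing the quadratic example above, taking the exact root Sqrt[7] as the reference:

approxRoot = x /. rt;                              (* the machine-precision root from FindRoot above *)
err = Abs[SetPrecision[approxRoot, 30] - Sqrt[7]]; (* redo the comparison at 30 digits to expose the true error of the machine root *)
N[{err, err/Sqrt[7]}, 10]
(* absolute and relative error; both should land around 10^-16 or below, i.e. the root is about as accurate as a machine number allows *)

A root with a residual near 10^-16 and a relative error near machine epsilon is, for most purposes, exactly what a correct numerical solver should return.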