Thank you. I'm just now beginning to see how esoteric this topic is typically considered, yet how clearly it was evident while I was immersed in learning R just after your Summer Boot Camp sessions.
I believe yet another example came up in today's lecture, when we were evaluating predictive probabilities for Boolean operators (or something like that), with values of 0.99998 accompanied by a mean error of 1.00058 (again, something like that; it's been a while since I've taken a stats class).
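For reference, here is a minimal sketch of the kind of thing I mean, in R since that's what I was using (base R only; the exact digits below are just what my machine prints, not the lecture's numbers):

    0.1 + 0.2 == 0.3   # FALSE, even though it is "obviously" true
    0.1 + 0.2 - 0.3    # prints about 5.551115e-17, not 0
    sin(pi)            # prints about 1.224647e-16, not 0
    # The leftover is representation error: 0.1, 0.2, and pi have no
    # exact binary floating-point representation, so results drift by
    # amounts near the machine epsilon, .Machine$double.eps.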
So perhaps a more precise version of my question for Friday is this: if we accept that these tiny values are, in fact, measurements of computational error, with a magnitude attributable to the specifications of the system that computed them, and we are willing to admit that values on the order of 10^-50 are meaningless aside from being artifacts of processor-level floating-point error...
why not just round them to zero, or replace them with some symbolic marker of irrelevance? Continued use seems to create more user confusion from inconsistent outputs, and a higher risk of error propagation, than benefit.
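For what it's worth, base R already seems to have a function that does roughly what I am proposing: zapsmall(), which rounds values that are tiny relative to the rest of a vector down to exact zeros (a minimal sketch, again assuming plain base R):

    x <- c(1, 1e-50, sin(pi))
    zapsmall(x)                  # 1 0 0 -- the noise terms become exact zeros
    round(0.1 + 0.2 - 0.3, 10)   # 0 -- ordinary rounding works per value, too

But both are opt-in, which brings me back to the question of defaults.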
Looking through the documentation, I saw some functions that appear to do exactly this under the heading Precision & Accuracy Control. But why not make such settings the default, or at least the default for a particular user type (like "student/learner")? Then everyone in a class would receive similar results regardless of what kind of computer they are using.
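As a concrete sketch of what such a "student/learner" default might look like, here is my guess at a mechanism using R's real startup file, ~/.Rprofile, and options(); I am not claiming this is what the documentation proposes:

    # ~/.Rprofile -- sourced automatically at the start of every R session,
    # so an instructor could hand one copy of this file to the whole class
    options(digits = 4)                # print fewer significant digits
    # and for equality tests, a tolerant comparison instead of exact ==:
    isTRUE(all.equal(0.1 + 0.2, 0.3))  # TRUE, unlike 0.1 + 0.2 == 0.3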