One of the options for SynthesizeMissingValues is Method, and one of the suboptions for Method is "EvaluationStrategy", which determines how the system replaces a particular missing value once it has calculated a distribution of the possibilities. One of the values for "EvaluationStrategy" is "ModeFinding". From its name, and from the logic of missing-value synthesis, I would have expected this setting to make the system output the mode of whatever distribution it has learned, and therefore that the output would always be the same. But it isn't. Here is the example provided in the documentation.
SynthesizeMissingValues[
 {{1, 2.2}, {2, 3.2}, {3, Missing[]}, {5, 6.2}, {6, 7}},
 Method -> <|"LearningMethod" -> "Multinormal",
   "EvaluationStrategy" -> "ModeFinding"|>]
If you run this code multiple times, you get a different answer each time. You can see this for yourself:
Table[
 SynthesizeMissingValues[
   {{1, 2.2}, {2, 3.2}, {3, Missing[]}, {5, 6.2}, {6, 7}},
   Method -> <|"LearningMethod" -> "Multinormal",
     "EvaluationStrategy" -> "ModeFinding"|>][[3, 2]],
 {10}]
To be sure, the answers are fairly similar to each other, but they are not identical. Is this because there is some randomness in how the "LearningMethod" develops the distribution? I don't suppose there's any way to force the result to be the same each time, such as providing a random seed through the Method option?
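The two things I would guess might help, though I cannot confirm either from the documentation, are the RandomSeeding option, which related functions such as LearnDistribution accept, and wrapping the evaluation in BlockRandom with a fixed seed. Here is a sketch of what I mean (treating RandomSeeding as an option of SynthesizeMissingValues is my own unverified assumption):

data = {{1, 2.2}, {2, 3.2}, {3, Missing[]}, {5, 6.2}, {6, 7}};

(* Guess 1: pass RandomSeeding directly, as LearnDistribution allows *)
SynthesizeMissingValues[data,
 Method -> <|"LearningMethod" -> "Multinormal",
   "EvaluationStrategy" -> "ModeFinding"|>,
 RandomSeeding -> 1234]

(* Guess 2: localize and seed the random state around the call *)
BlockRandom[
 SynthesizeMissingValues[data,
  Method -> <|"LearningMethod" -> "Multinormal",
    "EvaluationStrategy" -> "ModeFinding"|>],
 RandomSeeding -> 1234]

If neither of those makes the output reproducible, I suppose that would suggest the variation comes from somewhere the global random state does not reach.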
Any help appreciated. Also, it would be nice if the documentation explained a little about what the various "LearningMethod" settings are, or provided links to external references that describe them.