# EvaluationStrategy->"ModeFinding" in SynthesizeMissingValues

Posted 8 months ago
One of the options for SynthesizeMissingValues is Method, and one of the suboptions of Method is "EvaluationStrategy", which determines how the system replaces a particular missing value once it has calculated a distribution of the possibilities. One of the values for "EvaluationStrategy" is "ModeFinding". From its name, and from the logic of missing-value synthesis, I would have thought that this suboption value causes the system to output the mode of whatever distribution it has found, and that the output would therefore always be the same. But it isn't. Here is the example provided in the documentation:

```mathematica
SynthesizeMissingValues[{{1, 2.2}, {2, 3.2}, {3, Missing[]}, {5, 6.2}, {6, 7}},
 Method -> <|"LearningMethod" -> "Multinormal",
   "EvaluationStrategy" -> "ModeFinding"|>]
```

If one runs this code multiple times, one gets a different answer each time. You can see this for yourself:

```mathematica
Table[
 SynthesizeMissingValues[{{1, 2.2}, {2, 3.2}, {3, Missing[]}, {5, 6.2}, {6, 7}},
   Method -> <|"LearningMethod" -> "Multinormal",
     "EvaluationStrategy" -> "ModeFinding"|>][[3, 2]],
 {10}]
```

To be sure, the answers are fairly similar to each other, but they are not identical. Is this because there is some randomness in how the LearningMethod develops the distribution? I don't suppose there's any way to force the result to be the same each time, such as providing a RandomSeed to the Method? Any help appreciated. Also, it would be nice if the documentation explained a little bit what these various LearningMethods are, or provided a link to external references that describe them.
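The run-to-run variation can be summarized numerically by collecting the synthesized values from repeated calls. This is a sketch extending the Table example above; using Mean and StandardDeviation here is just one way to quantify the spread:

```mathematica
(* Collect the synthesized replacement for the Missing[] entry over 10 runs,
   then summarize how much it varies from run to run. *)
vals = Table[
   SynthesizeMissingValues[
     {{1, 2.2}, {2, 3.2}, {3, Missing[]}, {5, 6.2}, {6, 7}},
     Method -> <|"LearningMethod" -> "Multinormal",
       "EvaluationStrategy" -> "ModeFinding"|>][[3, 2]],
   {10}];
{Mean[vals], StandardDeviation[vals]}
```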
Hi Seth,

In your case the randomness comes from the distribution learning. (Sometimes randomness can also come from the algorithm that searches for the mode, but not in this case.) Because of the way the automation works (trying various methods, measuring their performance, timing them, etc.), there is never a guarantee that LearnDistribution (which is the function used underneath SynthesizeMissingValues) will be deterministic; nevertheless, the option RandomSeeding can alleviate most of the randomness. You can try, for example:

```mathematica
In[271]:= SynthesizeMissingValues[
  {{1, 2.2}, {2, 3.2}, {3, Missing[]}, {5, 6.2}, {6, 7}},
  Method -> <|"LearningMethod" -> "Multinormal",
    "EvaluationStrategy" -> "ModeFinding"|>,
  RandomSeeding -> 1234]

Out[271]= {{1, 2.2}, {2, 3.2}, {3, 4.40728}, {5, 6.2}, {6, 7}}
```

and it will almost always give the same result.

As for the learning methods: they are the methods of LearnDistribution, and each one is individually documented in the LearnDistribution reference pages.

Thanks, Etienne
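To check that the seed does stabilize the output, one can wrap the seeded call in Table just as in the original experiment. This is a sketch reusing the same toy data; the seed value 1234 is arbitrary:

```mathematica
(* With RandomSeeding fixed, the synthesized value should now be
   (almost always) identical across repeated runs. *)
Table[
  SynthesizeMissingValues[
    {{1, 2.2}, {2, 3.2}, {3, Missing[]}, {5, 6.2}, {6, 7}},
    Method -> <|"LearningMethod" -> "Multinormal",
      "EvaluationStrategy" -> "ModeFinding"|>,
    RandomSeeding -> 1234][[3, 2]],
  {5}]
```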