Cher Yves,
I took a look at your notebook.
I am uncertain what you are asking, but I think the question is why the built-in function NMinimize is not parallelized. Is that correct?
I agree it would be nice if some of the numerically intensive functions in Mathematica were designed to take an option such as Parallelize -> True, but no such option exists (it is one of the suggestions here: http://community.wolfram.com/groups/-/m/t/181759?ppauth=bCSNFi3t).
However, in your example you have this construct:
es0 = NMinimize[
  {Total[ParallelTable[
     (mmoyen0[datamhtcor[[j, 1]], datamhtcor[[j, 2]], datamhtcor[[j, 3]],
         dist, ee, gx, gy, gz] - datamhtcor[[j, 4]])^2,
     {j, 1, Length[datamhtcor]}]],
   gx > 2. && gy > 2. && gz > 2 && ee < Abs[dist/3]},
  {{gx, 2, 2.2}, {gy, 2.1, 2.2}, {gz, 2.1, 2.2}, {dist, -5, -2}, {ee, .3, .5}},
  Method -> "NelderMead", MaxIterations -> 100]
If NMinimize were parallelized, this construct would be unlikely to work, because you would be wrapping one parallelization inside another. You would be better off precomputing the objective, e.g.
minimizationTarget = Total[ParallelTable[
   (mmoyen0[datamhtcor[[j, 1]], datamhtcor[[j, 2]], datamhtcor[[j, 3]],
       dist, ee, gx, gy, gz] - datamhtcor[[j, 4]])^2,
   {j, 1, Length[datamhtcor]}]]
and then minimizing that in a subsequent step:
NMinimize[
  {minimizationTarget, gx > 2. && gy > 2. && gz > 2 && ee < Abs[dist/3]},
  {{gx, 2, 2.2}, {gy, 2.1, 2.2}, {gz, 2.1, 2.2}, {dist, -5, -2}, {ee, .3, .5}},
  Method -> "NelderMead", MaxIterations -> 100]
One last observation: instead of doing this:
(* test where ParallelTable is used for the 60 experimental points *)
t1 = AbsoluteTime[];
Do[
 Total[ParallelTable[
   (mmoyen0[datamhtcor[[j, 1]], datamhtcor[[j, 2]], datamhtcor[[j, 3]],
       -10, 0., 2, 2, 2] - datamhtcor[[j, 4]])^2,
   {j, 1, Length[datamhtcor]}]],
 {i, 1, 30}]
t2 = AbsoluteTime[] - t1
There is a handy function AbsoluteTiming[] which you can wrap around the Do; it returns the elapsed wall-clock time together with the result. (Plain Timing[] only counts CPU time spent in the main kernel, so it would miss the work done on the parallel subkernels.)
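For example, a sketch assuming the same definitions as above:

{elapsed, result} = AbsoluteTiming[
   Do[
    Total[ParallelTable[
      (mmoyen0[datamhtcor[[j, 1]], datamhtcor[[j, 2]], datamhtcor[[j, 3]],
          -10, 0., 2, 2, 2] - datamhtcor[[j, 4]])^2,
      {j, 1, Length[datamhtcor]}]],
    {i, 1, 30}]];
elapsed (* elapsed wall-clock seconds for the 30 repetitions *)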