Message Boards

VectorPlot3D underperformance in Mathematica 11.2 vs 11.0.1?


I attach here a notebook created by Mathematica 11.2. The machine I am running it on is a Mac Pro 6,1, that is, a 3.7 GHz machine with 4 cores, a 500 GB SSD drive and 64 GB of memory.

Mathematica 11.0.1's run time for the VectorPlot3D is 1.10571 seconds. Mathematica 11.2's run time for the VectorPlot3D is 1247.3 seconds.

The Benchmark test is about the same for both versions. I want my money back. If that is not possible, Wolfram, please send me about 10 Nvidia DGXs so that with 11.2 I can be on par with 11.0.1.
Thanks ahead, János

POSTED BY: Janos Lobb
4 months ago

Have you tried comparing the formulas for nqvrbmmf in the two versions? Are they the same formula?

POSTED BY: Gianluca Gorni
4 months ago

I ran the same attached notebook with both Mathematica versions, so the formulas should be the same, provided internal compatibility is preserved.

POSTED BY: Janos Lobb
4 months ago

I have run it on 11.2 and OS X 10.13.3. The runtime was 1424 s. On 11.0.1 and OS X 10.13.3 the runtime was 1.025 s.

Best wishes,

Marco

POSTED BY: Marco Thiel
4 months ago

Thanks Marco. Then it is not just my nightmare. I am wondering if I should send it to Support too.

POSTED BY: Janos Lobb
4 months ago

Dear Janos,

I have already submitted that with an additional but related issue.

Best wishes, Marco

POSTED BY: Marco Thiel
4 months ago

Just to be sure I submitted too. Thanks a lot. János

POSTED BY: Janos Lobb
4 months ago

Thanks to those who brought this to our attention. In investigating the slowdown in recent versions of Mathematica, it looks like the slow evaluation is a result of VectorPlot3D needing to evaluate the nqvrbmmf[mu0, a, b, c, x, y, z] function at all the points that the plot samples. You can see that individual calls of nqvrbmmf are relatively slow:

In[5]:= Do[
  nqvrbmmf[1000000, 0.01`, 0.01`, 0.01`, x, y, 
    z] /. {x -> RandomReal[{-0.015`, 0.059`}], 
    y -> RandomReal[{-0.02`, 0.02`}], 
    z -> RandomReal[{-0.035`, 0.115`}]}, 200] // AbsoluteTiming

Out[5]= {19.7192, Null}

This slow evaluation time can be addressed by forcing evaluation of VectorPlot3D's first argument with Evaluate prior to the actual evaluation of the plot:

In[6]:= AbsoluteTiming[
 VectorPlot3D[
     Evaluate[nqvrbmmf[mu0, a, b, c, x, y, z]] , {x, -1.5*a, 
      5.5*a + 4*gap}, {y, -2*b, 2*b}, {z, -3.5*c, 1.5*c + R}, 
     VectorPoints -> {40, 20, 30}, BoxRatios -> Automatic, 
     Axes -> True, PlotLegends -> Automatic, 
     PlotRange -> {{-0.015, 0.059}, {-0.02, 0.02}, {-0.035, 0.115}}] //
     ReplaceAll[#, {mu0 -> 10^6, a -> 0.01, b -> 0.01, c -> 0.01, 
       gap -> 0.001, R -> 0.1}] & // Quiet;]

Out[6]= {1.43917, Null}

The problem here was actually in Mathematica 11.0.1, where it looks like VectorPlot3D's first argument was being evaluated before it should have been, leading to the speedup and causing the confusion.

POSTED BY: Kyle Martin
4 months ago

Thank you Kyle. I knew about the warning in the VectorPlot3D documentation, but it looks like I did not take it seriously, because the plotting just worked in 11.0.1 and that made me lazy. Thanks again, János

POSTED BY: Janos Lobb
4 months ago

Having to wrap Evaluate[] around a function being plotted comes up often enough that it might be helpful if the documentation spent some time discussing best practices around this programming idiom.

In this case, it is clear that 11.0.1 is the version with an issue. However, even some experienced Mathematica users missed the fact that wrapping the function in Evaluate[] when plotting essentially replicates what 11.0.1 was doing, as far as speed of execution goes.

For routine plots (2-D and 3-D) it makes no difference, but for anything computationally intensive, more explicit guidance in the documentation would be very useful.
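A minimal sketch of the idiom, using a made-up expensive symbolic definition f (it is a stand-in, not a function from this thread); timings will differ by machine:

    (* f is defined with a symbolic integral; SetDelayed (:=) means the
       integral is re-derived every time f is called with a new argument *)
    f[x_] := Integrate[Sin[t]/t, {t, 0, x}]

    (* Plot holds its first argument, so f[x] is re-evaluated, integral
       and all, at every numeric sample point: *)
    Plot[f[x], {x, 0, 10}]

    (* Evaluate lets f[x] simplify once, to SinIntegral[x], before any
       sampling starts: *)
    Plot[Evaluate[f[x]], {x, 0, 10}]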

POSTED BY: George Woodrow III
4 months ago

I would have thought that immediate definitions (=) instead of delayed definitions (:=) for the quantities rbmmsp and rbmmf would be equivalent to Evaluate, because the very complicated formulas are then pre-calculated before the insertion of numeric values. Surprisingly to me, there is no speed-up.

The lesson here seems to be this: if we are plotting a function with a very lengthy algebraic formula, we had better feed it to Plot etc. in expanded form, either with Evaluate or With (or, conceptually, a copy-paste of the formula itself).
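The With variant injects the expanded formula into the plotting call before Plot ever sees it. A sketch with a made-up stand-in g for a lengthy formula:

    (* g expands to a 50-term polynomial; a stand-in for a long formula *)
    g[x_] := Sum[x^k/k!, {k, 0, 50}]

    (* With evaluates expr = g[x] once, then substitutes the expanded
       polynomial literally into the held Plot expression: *)
    With[{expr = g[x]},
     Plot[expr, {x, -2, 2}]]

The effect is the same as Plot[Evaluate[g[x]], ...]: the expansion happens once instead of at every sample point.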

POSTED BY: Gianluca Gorni
4 months ago

One more tidbit to add. When I tried to plot it without Evaluate, it grabbed 20 GB as "Cached Files". So it was doing some heavy-duty work on my SSD - although there was still plenty of real memory available - but I realized it only after the fact. I do not know if it is still one of the side effects of the front end being 32-bit. There were times a few years ago when I pushed this machine down to 1% idle with constant swapping in and out of memory, with virtual memory going into hundreds of GBs, but even then the "Cached Files" never grew to 20 GB; see the picture attached.

[Attached image: Activity Monitor screenshot showing "Cached Files" at about 20 GB]

POSTED BY: Janos Lobb
4 months ago

Do you think it is reasonable that this computation should take that much memory? It seems very inefficient, especially since the number of vectors being computed is not excessive.

POSTED BY: George Woodrow III
3 months ago

I really do not know whether that was reasonable or not. The WolframKernel is 64-bit, so it can grab quite a few bits of memory if it wants to. I definitely did not think that I had that much data to compute. However, I mostly compute symbolically as long as I can and turn to numerical approaches only as a last resort, so it is possible that very long expressions built up in the kernel memory.

POSTED BY: Janos Lobb
3 months ago
