Simple computation runs out of memory

Posted 2 years ago

Moderator WARNING: the code below may cause issues with your computer's memory or other systems. Please run it only after careful consideration and at your own risk.


Using a Mathematica Cloud student license.

getnums[x_]:=Apply[Plus,Tuples[{0,1,2,3},x],1];
getnumslist[x_]:=Flatten[Table[Apply[Plus,Tuples[{0,1,2,3},x[[i]]],1],{i,Length[x]}]];
f=Nest[getnumslist,{1},3]
Length[f]
Tally[f]
Histogram[f]

This runs OK.

getnums[x_]:=Apply[Plus,Tuples[{0,1,2,3},x],1];
getnumslist[x_]:=Flatten[Table[Apply[Plus,Tuples[{0,1,2,3},x[[i]]],1],{i,Length[x]}]];
f=Nest[getnumslist,{1},4]
Length[f]
Tally[f]
Histogram[f]

This gives an error:

This computation has exceeded the memory limit for your plan.

$Aborted

My estimate is that this takes less than 100K of RAM. I use the software for less than 20 hours a month, and my total computation time used per month in the cloud is probably under 200 seconds. This is a huge disappointment; I don't see the value in software that cannot handle such simple computations. I ran a similar program in Python, generating much larger data sets in less than 1 second on a 6-core CPU locally, again probably using less than 100K of RAM, although I have not measured it exactly. Python is free. R is free. GNU Octave is free. Sage is free. Julia is free. Paid software needs to offer better capability if it is to compete with free software that can already do these simple computations.

Can Wolfram support fix the RAM limit? This must be an honest mistake.

POSTED BY: R Smith
9 Replies

As for refactoring, that in and of itself might be a challenge: one can avoid the memory grab, but at the expense of a long iteration. I'd instead recommend changing the algorithm to use powers of (1+t+t^2+t^3), keeping the tally directly in the result rather than repeating computations.

getnumslist3[ll_] := Module[
  {t, sslist, pairs},
  sslist = ll[[All, 1]];
  pairs = SplitBy[SortBy[Flatten[Table[
       With[
        {clist = 
          Reverse@CoefficientList[
            Apply[Plus, t^{0, 1, 2, 3}]^sslist[[i]], t]},
        MapIndexed[{#2[[1]] - 1, #1*ll[[i, 2]]} &, clist]],
       {i, Length[sslist]}], 1], First], First];
  Map[{#[[1, 1]], Total[#[[All, 2]]]} &, pairs]
  ]

Check that the result agrees with Tally on the original for three iterations.

getnumslist3[f1]

(* Out[174]= {{0, 85}, {1, 342}, {2, 1038}, {3, 2631}, {4, 5575}, {5, 
  10458}, {6, 17656}, {7, 27010}, {8, 37930}, {9, 49128}, {10, 
  58873}, {11, 65545}, {12, 67861}, {13, 65370}, {14, 58618}, {15, 
  48858}, {16, 37785}, {17, 27051}, {18, 17850}, {19, 10803}, {20, 
  5958}, {21, 2964}, {22, 1314}, {23, 510}, {24, 168}, {25, 45}, {26, 
  9}, {27, 1}} *)

Now go one level further.

f = Nest[getnumslist3, {{1, 1}}, 4]

(* Out[191]= {{0, 621436}, {1, 7471332}, {2, 52618932}, {3, 
  279236466}, {4, 1222541634}, {5, 4631585400}, {6, 15630467652}, {7, 
  47900237346}, {8, 135130759246}, {9, 354505884069}, {10, 
  871618690157}, {11, 2020850395673}, {12, 4440188064925}, {13, 
  9283201253072}, {14, 18530962357622}, {15, 35419759585918}, {16, 
  64983058637123}, {17, 114675839863123}, {18, 195006335705772}, {19, 
  320048871959497}, {20, 507661915598537}, {21, 779202693545018}, {22,
   1158531944606351}, {23, 1670155976457080}, {24, 
  2336467251666880}, {25, 3174214748838401}, {26, 
  4190539380488303}, {27, 5379108756793194}, {28, 
  6717029179946467}, {29, 8163250385673072}, {30, 
  9659073904440823}, {31, 11131120472464670}, {32, 
  12496733434607085}, {33, 13671357086181861}, {34, 
  14577019685433131}, {35, 15150763816905911}, {36, 
  15351776676473959}, {37, 15166114770549800}, {38, 
  14608274997888944}, {39, 13719370373306336}, {40, 
  12562221629904927}, {41, 11214162717002609}, {42, 
  9758682609222760}, {43, 8277131010514440}, {44, 
  6841595633218818}, {45, 5509756887730626}, {46, 
  4322119895369226}, {47, 3301604708579376}, {48, 
  2455125094483191}, {49, 1776559202824776}, {50, 
  1250432068840947}, {51, 855677138592297}, {52, 
  568984880918724}, {53, 367433373299856}, {54, 230282997223648}, {55,
   139971096690798}, {56, 82444594000239}, {57, 47016461764467}, {58, 
  25934404994040}, {59, 13821926222520}, {60, 7108868470206}, {61, 
  3523582015788}, {62, 1680591108936}, {63, 770004663372}, {64, 
  338253440475}, {65, 142154088348}, {66, 57011216124}, {67, 
  21756929787}, {68, 7874531643}, {69, 2692456380}, {70, 
  865698255}, {71, 260302752}, {72, 72705528}, {73, 18707715}, {74, 
  4388040}, {75, 925515}, {76, 172341}, {77, 27612}, {78, 3663}, {79, 
  378}, {80, 27}, {81, 1}} *)

This can be plotted using e.g. ListPlot or, if one prefers pictures like those from Histogram, BarChart[f[[All, 2]], BarSpacing -> 0].
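For comparison with the Wolfram Language version above, here is a rough Python sketch of the same tally-and-coefficients idea (the helper names `step` and `tally` are mine, not from the thread; this is an illustration, not the poster's original Python code):

```python
from collections import Counter

def step(tally):
    """One nesting level. Each entry (s, c) means the sum s occurs c times.

    The sums of s independent draws from {0, 1, 2, 3} are distributed as
    the coefficients of (1 + t + t^2 + t^3)^s, computed here by repeated
    convolution and then weighted by the multiplicity c.
    """
    out = Counter()
    for s, c in tally.items():
        coeffs = [1]                          # (1 + t + t^2 + t^3)^0
        for _ in range(s):
            nxt = [0] * (len(coeffs) + 3)
            for i, v in enumerate(coeffs):    # multiply by 1 + t + t^2 + t^3
                for d in range(4):
                    nxt[i + d] += v
            coeffs = nxt
        for value, count in enumerate(coeffs):
            out[value] += count * c
    return out

tally = Counter({1: 1})                       # start from the single sum 1
for _ in range(4):                            # four nesting levels
    tally = step(tally)

print(tally[0], tally[81])                    # 621436 1, matching the output above
```

Like the Wolfram Language version, this stores only (value, multiplicity) pairs, so four levels finish quickly in a few kilobytes instead of enumerating on the order of 10^16 tuples.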

POSTED BY: Daniel Lichtblau

The code that was posted will attempt a large memory grab. That's the sort of thing that might prompt one to think it's a troll. I for one would like to see the Python code that handles this without requiring considerable memory.

I will add that I do not know what the memory limit for your license is, but I'm pretty sure it well exceeds 5MB, because nesting to three levels already well exceeds that usage. As others have noted, the memory needed to nest one more level would be huge.

POSTED BY: Daniel Lichtblau
Posted 2 years ago

You could probably refactor the code to use an analytic solution rather than the enormous counting job your code attempts to do. Acknowledging that I'm no expert on performance and memory issues, my simple back-of-the-envelope calculations suggest that we're well beyond even petabyte territory here. It's entirely possible that my mental model of what's happening is wrong, so I welcome corrections to my calculations. You claimed 100K of memory consumption; maybe you can justify that claim.

Otherwise, since you already have a Python solution, it seems that your translation from Python to Mathematica introduced inefficiencies. Maybe you could show your Python solution, or at least explain your algorithm, and someone might be able to suggest a better translation to Mathematica. It might be that your Mathematica code simply doesn't match your intention due to some simple oversight or misunderstanding. Maybe your real problem does require only 100K.

POSTED BY: Eric Rimbey
Posted 2 years ago

The average personal computer now has 16GB of RAM: rarely less than 4GB, and usually not more than 32GB for desktop users who are not running heavy workloads, VMs, high-performance compute, etc. In the data center it would not be uncommon to see 1TB of RAM on an application server with one or two nodes. My Windows 10 machine has allocated 500MB of RAM just to view this web page and run its JavaScript.

Let's be generous and assume that the computation automatically aborted by the application server uses 50MB of RAM for less than 1 second. In the data center, the Mathematica Cloud back-end libraries would be in RAM, on an NVMe SSD, or even on Optane modules. Whatever REST or WebSocket API is used for the client-server connection is a separate process and a separate thread from the process that actually does the Mathematica computation. The compute threads spawn on submission and die after the computation is complete; the query and result may be cached for some period of time, either locally on the node or globally at Wolfram's data center.

This thread is not trolling. I am asking whether users who pay for Mathematica Cloud are limited to 5MB of RAM. I can't actually see what the limit is: this user interface does not give an error message like, for example, "computation exceeded 5MB of RAM for license=student". I think that would be more helpful. I can, for example, break the computation up into parts that will run in the cloud on my current license, and that does eventually consume these forbidden resources, but with a much less efficient use of my time: using 1MB for at most 5 seconds, repeated 10 times, is equivalent to 50MB for 1 second.

I think it is rather disappointing that the paid service is so limited, both on Wolfram|Alpha and the Mathematica Cloud products I have purchased for a total of $72. It is even more disappointing that when I ask a serious question about something I think the company needs to address, I get called a troll. Would it only be acceptable to ask for 1MB of RAM, then? If you want 2MB of RAM on an application server you are renting, you get called a troll? Who is being reasonable here, and who is trolling?

If one node in the data center has 1TB of RAM, then you could have half of that for libraries and 500,000MB to service 500,000 simultaneous Mathematica Cloud student compute requests per second. If a user ran the web app for 100 hours per month, with 1% of that time spent requesting compute jobs in the cloud, that would be 1 hour of computation time per user per month. The theoretical maximum, assuming the load was at a constant rate, would be 720 users per 1MB server thread, that is, 360,000,000 users per month on a single 1TB node in a data center. If the load has some huge concentration at peak hours it could be much, much less, maybe 1,000,000 users per node. At $11 per user, minus administration and marketing costs, this would still be $5,000,000 in profit to run a single node with a capex of $30K and an opex of $500 per month.

If there is some part of this that you do not agree with, please tell me why you think that and where you are getting your information from. Let's have a respectful discussion. If you think that I can refactor the code to use less RAM, then please show me how you think I can do that.

POSTED BY: R Smith
Posted 2 years ago

Ah. Hadn’t thought of that.

POSTED BY: Eric Rimbey

This code seems almost purposefully written to use as much memory as possible. It's trolling, surely?

POSTED BY: Gareth Russell
Posted 2 years ago

I think there must be some disconnect here. If I understand this correctly, getnumslist essentially maps getnums over the input and flattens everything out. The max value of Nest[getnumslist,{1},3] is 27, and that means that at the next step we will calculate, at some point, getnums[27]. That will generate 4^27 tuples (which is on the order of 10^16), each of which has 27 elements. And that's just for the max element of Nest[getnumslist,{1},3]. It will also compute getnums[26] several times, getnums[25] several times, etcetera. I'm not seeing how this comports at all with your 100K estimate.
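For concreteness, here is a quick back-of-the-envelope check of those numbers (the one-byte-per-element packing is my own assumption for estimating a lower bound):

```python
# Number of tuples generated by Tuples[{0,1,2,3}, 27]: 4^27 lists of length 27.
tuples = 4 ** 27
print(tuples)                 # 18014398509481984, on the order of 10^16

# Even packed at one byte per element, these tuples alone would need
bytes_needed = tuples * 27
print(bytes_needed / 10**15)  # ~486 petabytes
```

And that is only for getnums[27]; the smaller cases add more on top.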

POSTED BY: Updating Name
Posted 2 years ago

A mistake. Deleted.

POSTED BY: Hans Milton

I'm no expert in WL, but please explain your estimate of the calculation taking 100K of RAM. When I ran your first example, Length[f] was 621,436.

Of course, the values in this list easily fit within a byte, so if your 100K estimate is measured in 4-byte words, holding those 621K byte-sized values would still take on the order of 150K words of RAM.

I'll leave it up to the experts to educate me on this matter, though.

POSTED BY: Lawrence Winkler