Message Boards

Setting global precision and decimals

Posted 10 years ago

(I have also posted this on StackExchange, but am posting it here in case different people follow these groups.)

I am quite a newbie, and it has taken me a lot of time to figure out how to achieve something that should be simple; I still haven't found it.

What I want is, I think, simple. I run a dynamic model of a metabolic pathway in Mathematica and calculate the values of several variables at steady state for multiple conditions. The model contains constant parameters. These have low precision simply because it would take quite some effort to append a string of zeros to each one (and it would also make the code look unappealing). There must be a way to tell Mathematica to treat these numbers as high-precision numbers, right?
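For example, per number I gather something like this is possible (a sketch with a made-up value); what I am missing is how to apply it everywhere at once:

k1 = SetPrecision[0.12, 30];  (* pad 0.12 out to 30 digits *)
k2 = 0.12`30;                 (* same idea via the backtick precision mark *)
Precision /@ {k1, k2}         (* -> {30., 30.} *)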

So for all the different conditions I calculate the variable values at steady state. I use ParallelTable quite often, together with FindRoot and NDSolve.

An example of a relevant piece of the script is:

tsol0 = ParallelTable[
   NDSolve[
    Join[
     Odes /. RateEqs /. CoAMATX /.
       ReplacePart[Parm,
        Position[Parm, C6AcylCarMAT][[1, 1]] ->
         C6AcylCarMAT -> k[[j]]] /. Parm2,
     InitialConditions],
    Vars, {t, 0, 1000000}],
   {j, 1, Length[k]}];

ssp0 = Table[
   FindRoot[
    Table[Odes[[i, 2]] == 0, {i, 1, Length[Odes]}] /. RateEqs /.
       CoAMATX /.
      ReplacePart[Parm,
       Position[Parm, C6AcylCarMAT][[1, 1]] ->
        C6AcylCarMAT -> k[[j]]] /. Parm2,
    ParallelTable[{Vars[[i]][t], (Vars[[i]][900000] /. tsol0[[j]])[[1]]},
     {i, 1, Length[Vars]}]],
   {j, 1, Length[k]}];

Parm and Parm2 are lists of constant parameters. Odes, RateEqs, and CoAMATX are the functions and equations that together define how all the variables change in time. With tsol0 I calculate the values of all the variables at each time point. I do this for the k conditions; in each condition the constant parameter C6AcylCarMAT has a different value, because I want to calculate the time courses for an increasing C6AcylCarMAT value.

With ssp0 I then find the steady-state value for each condition, on the assumption that by around t = 900000 min the variable values should no longer be changing. As a result, ssp0 is a table containing the steady-state values of all the (metabolite-concentration) variables for each condition: each row is one condition and each column is one metabolite. The remaining steady-state variables I can calculate from the steady-state metabolite concentrations together with Odes, RateEqs, and CoAMATX.
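In other words, the pattern is the following (a toy sketch with a single made-up ODE, not my actual model):

(* integrate far out in time, then use the late-time value
   as the starting point for FindRoot *)
ode = {x'[t] == 1 - x[t]^2, x[0] == 2};
tsol = NDSolve[ode, x, {t, 0, 100}];
start = x[100] /. First[tsol];           (* nearly steady by t = 100 *)
ss = FindRoot[1 - x^2 == 0, {x, start}]  (* -> {x -> 1.} *)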

Anyway, long story. This all works; however, I get precision-related warnings, and my output values do not all have the same number of decimals. For example, this is the warning I get below my steady-state (ssp) calculation:

FindRoot::lstol: The line search decreased the step size to within tolerance specified by AccuracyGoal and PrecisionGoal but was unable to find a sufficient decrease in the merit function. You may need more than MachinePrecision digits of working precision to meet these tolerances.

I also believe I have substantial error propagation, especially because some of the steady-state values are very large or very small, so further calculations with them accumulate considerable error. And when I do my control-analysis calculations, the checks fail exactly in the conditions where some variable values are very large or very small: some entries of the matrix that should be the identity, and therefore zero off the diagonal, come out as large as 0.6 instead of zero, which is a huge error.

I just want to set, at the beginning of the notebook:
1. all the constant parameters (e.g. in Parm and Parm2) to a very high precision (for instance 30);
2. all calculations in the notebook to be done at that high precision;
3. all results to be displayed at high precision (e.g. 30 decimals).
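Roughly, I imagine something like the following at the top of the notebook (a sketch of what I am after; I do not know whether these are actually the right settings):

(* what I imagine, possibly wrong: *)
$MinPrecision = 30;  (* never let precision fall below 30 digits... *)
$MaxPrecision = 30;  (* ...together giving fixed 30-digit arithmetic *)

(* as far as I understand, these only affect arbitrary-precision numbers,
   so the parameters themselves would still need something like
   SetPrecision[..., 30], and full digits would be shown with e.g.
   NumberForm[result, 30] *)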

Can you please help me? I have been trying lots of things and racking my brain for quite some time, and I have given up for now. I also tried

$PreRead = (# /. s_String /; StringMatchQ[s, NumberString] && Precision@ToExpression@s == MachinePrecision :> s <> "`50." &); 

from the thread "How to set the working precision globally? $MinPrecision does not work", but that didn't work either (at least it did not display the results with all those decimals, so I don't know for sure whether it worked or not).

I have attached my (already run) notebook file if that helps.

Thank you very much in advance,

Anne-Claire

6 Replies

That notebook is a mess. Some of the colored, highlighted code is impossible to read (and imagine a color-blind reader). Then there are redefinitions of all sorts of things. If you want serious attention paid to your question, you'll need to do a much better job of pruning and isolating the problem(s).

I will recommend three things.

(1) Define the functions R1 etc. only for explicit numeric (?NumberQ) arguments. This keeps them from evaluating symbolically before FindRoot and NDSolve substitute numbers, which is a common source of trouble. It would be done (after quitting the kernel, or at least clearing the current definitions) as

R1[V_?NumberQ, Kms1_?NumberQ, Kms8_?NumberQ, Kmp1_?NumberQ, 
 Kmp7_?NumberQ, Keq_?NumberQ, S1_?NumberQ, S8_?NumberQ, P1_?NumberQ, 
 P7_?NumberQ] := ...

(2) Set the parameters to high precision. For the shorter list this could be done as

Parm2 = SetPrecision[{VR1 -> 0.12, VR2 -> 0.2, VR3 -> 0.2, 
    VR4 -> 0.6}, 100];

(3) Run FindRoot at higher precision than the default (which is machine precision). This could be done as in the example below.

ss1 = FindRoot[
  Table[Odes[[i, 2]] == 0, {i, 1, Length[Odes]}] /. RateEqs /. 
     CoAMATX /. Parm /. Parm2, t1, WorkingPrecision -> 50]
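(For reference: with AccuracyGoal and PrecisionGoal left at their Automatic defaults, both are taken to be half the WorkingPrecision, so raising WorkingPrecision also tightens the tolerances that the lstol message refers to.)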

If you have further trouble, I advise you to pay serious attention to the first remarks when formulating any follow-up question(s). This notebook is far too long to go through just to guess at where the problems might be.

POSTED BY: Daniel Lichtblau

You should also try the changes I suggested. I did, and did not run into the problems mentioned in the first post.

The analysis of error propagation will still need to be done, of course. But at least you'll get further along in the computations to the point where that can proceed.

POSTED BY: Daniel Lichtblau

Dear Daniel,

I am sorry about the notebook. We have been thinking very hard about how to simplify the model, and this version is already a much-simplified and modularized one. The actual model has more reactions: 56 in total instead of the 4 I have here.

The thing is that we are modeling a complex metabolic pathway in rat mitochondria, the ATP-producing organelles of the cell. This pathway contains many loops, and the same enzyme is used for several different reactions. The pathway contains 11 enzymes; for instance, 5 of the 11 enzymes each catalyze 7 reactions (so in fact 5*7 + 3*5 + 1*3 + 1*1 + 1*2 = 56 reactions catalyzed by 11 enzymes). Each reaction uses different parameter values, but the rate equation for each enzyme is the same (except for one of the enzymes). That is why we split the definition of the reaction rates into (A) unique functions per enzyme and (B) the rate equations (RateEqs) for each separate reaction, each of which calls the corresponding function. In the large model, with many more reactions per enzyme, it will become clearer why we made this split. You are right that the split is not necessary here, but it makes it easier for me to change the parameters I put into each entry of the RateEqs list.

By giving each parameter a name related to the actual kinetic parameter in question, we can more easily recognize which constant we need to change when needed. So for us this makes the notebook more readable.

The colors were just a way for me to remember what I had changed; I am sorry about that.

I will clean up the code for the reader and resubmit as soon as possible.

Thank you,

Anne-Claire

Dear Daniel,

Thank you for looking further into my problem. I have added the precision code for the parameter lists and the ssp calculation, and it did indeed resolve the warning I had before. :-) Thank you. However, the increase in precision has not changed my identity matrix for the larger k values (the result with which I check whether all my calculations are OK), and I have not yet found any mistake in my code. So it is still only in the situations where some of the variables are very small and others very large (which occurs at the low k values) that I do not obtain a perfect identity matrix. I will continue working on it, simplifying and checking it.

Once I have a more readable script I will resubmit and give an update.

Kind regards,

Anne-Claire

You could be running into ill-conditioned problems. At that point you may need to think carefully through the formulation to see whether the results make sense (sometimes they do, even when the error terms seem large). It is also possible that rescaling the variables will help.
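For instance, rescaling might look like this (a toy sketch with made-up equations, not taken from the notebook):

(* toy example: the root has x ~ 10^-8 and y ~ 10^4, badly scaled *)
eqs = {10^8 x - 10^-4 y == 0, x y == 10^-4};
(* substitute x -> 10^-8 X, y -> 10^4 Y so the unknowns are O(1) *)
scaled = Simplify[eqs /. {x -> 10^-8 X, y -> 10^4 Y}];
sol = FindRoot[scaled, {{X, 2}, {Y, 1/2}}];
{10^-8 X, 10^4 Y} /. sol  (* back to original variables: {1.*10^-8, 10000.} *)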

POSTED BY: Daniel Lichtblau

Thank you very much! I will discuss this with my professor and have other students in my group recheck my code (I might be missing something, since I have been staring at it for so long). I will keep you posted.
