
Alternative to MiniMaxApproximation?

Posted 5 years ago

Hi,

I am trying to use MiniMaxApproximation to obtain a polynomial approximant to a certain function. The approximant is to be used in a C++ program implemented with extended precision (long double variables), which have a relative accuracy of about 10^-19. From my experience (I have successfully done this a dozen times), in order to reach such an accuracy I have to obtain a relative error of about 10^-27 or smaller while using MiniMaxApproximation in Mathematica; otherwise the polynomial coefficients will not be sufficiently accurate. This also means that the polynomial order must be rather high (typically order 20 suffices).

The problem is that computation of my function is quite expensive. In addition, the function is calculated by numerically inverting a certain Laplace transform, so its values cannot be more accurate than a prescribed number of digits (I use 40; a larger number slows down the calculations significantly). After several days of computation I have not succeeded in obtaining a polynomial of order higher than 5, and its relative error is only close to 10^-9. Attempts to obtain a higher-order polynomial make Mathematica fail. I am not sure exactly why, because I have to run these calculations in batch mode, so I do not see any error messages.

My questions are: (1) Is there any way to set the parameters of MiniMaxApproximation so that I can complete these calculations? (2) Is there any Mathematica command or package, alternative to MiniMaxApproximation, that would work on a set of precalculated function values rather than calling the function many times? Such a variant of the algorithm should be much faster.

I am attaching my program, if anyone would be so kind as to examine it.

Leslaw
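P.S. For reference, the kind of call I have in mind looks roughly like the sketch below. This is only a simplified illustration: f is a cheap stand-in, not my actual Laplace-inversion function, and the interval and option values are made up.

Needs["FunctionApproximations`"]

(* cheap stand-in for the expensive, numerically inverted Laplace transform *)
f[x_] := Exp[-x]/(1 + x);

(* degree-20 polynomial approximant (denominator degree 0) on [1/20, 1/4];
   the result has the form {abscissas, {approximation, max relative error}} *)
mma = MiniMaxApproximation[f[x], {x, {1/20, 1/4}, 20, 0},
   WorkingPrecision -> 50, MaxIterations -> 40];

approx = mma[[2, 1]];
maxRelativeError = mma[[2, 2]];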

POSTED BY: Leslaw Bieniasz
6 Replies

The integral of a Chebyshev series may be expressed as another Chebyshev series using the identities here supplemented with

Integrate[ChebyshevT[0, x], x] == ChebyshevT[1, x]

and

Integrate[ChebyshevT[1, x], x] == (ChebyshevT[2, x] + ChebyshevT[0, x])/4.
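As a small sketch of how this can be mechanized (the helper names below are only illustrative), one can integrate a Chebyshev series term by term, using the general identity

Integrate[ChebyshevT[n, x], x] == (ChebyshevT[n + 1, x]/(n + 1) - ChebyshevT[n - 1, x]/(n - 1))/2

for n >= 2 together with the two special cases above:

(* antiderivative of a single Chebyshev term, up to a constant *)
chebIntegrateTerm[0, x_] := ChebyshevT[1, x];
chebIntegrateTerm[1, x_] := (ChebyshevT[2, x] + ChebyshevT[0, x])/4;
chebIntegrateTerm[n_Integer /; n >= 2, x_] :=
  (ChebyshevT[n + 1, x]/(n + 1) - ChebyshevT[n - 1, x]/(n - 1))/2;

(* antiderivative (up to a constant) of the series Sum[c[[n + 1]] ChebyshevT[n, x]] *)
chebIntegrate[c_List, x_] :=
  Sum[c[[n + 1]] chebIntegrateTerm[n, x], {n, 0, Length[c] - 1}];

(* quick check against direct integration: the difference is a constant *)
Simplify[chebIntegrate[{3, -1, 2, 5}, x] -
  Integrate[3 ChebyshevT[0, x] - ChebyshevT[1, x] + 2 ChebyshevT[2, x] + 5 ChebyshevT[3, x], x]]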

POSTED BY: John Doty

I am aware of these problems, but I would prefer to obtain the fitted polynomials in the power basis, because I need to process them further (integrate them analytically) in my program; Chebyshev or other bases would be less convenient from this point of view. I believe I should be able to get useful results using LinearModelFit, but I do not understand the technical problems described in my previous message, and I cannot find any relevant information on the web about the error messages I obtain. So perhaps someone can tell me what happens and how to complete the calculations successfully. Leslaw.

POSTED BY: Leslaw Bieniasz

The Remez algorithm used by MiniMaxApproximation is notoriously slow, and polynomials represented in the power (monomial) basis are notoriously ill-conditioned. For these problems I like to use Chebyshev polynomials as a basis. It is common for a fitted Chebyshev series to yield almost as good a result as a minimax polynomial, and it generally does not suffer from the "higher error at the ends" phenomenon. Rescale your independent variable so that its range is (-1, 1) and then use ChebyshevT[n, x] with n = 1, 2, 3, ... as your set of basis functions for LinearModelFit.

A good numerical library will have fast, accurate functions to calculate Chebyshev polynomials, so the results should transfer well to code in other languages.
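A minimal sketch of what this might look like (g, a, b, deg, and ndigits below are placeholders, not your actual function or settings):

(* stand-in for the expensive function, defined on an interval [a, b] *)
g[x_] := Exp[-x]/(1 + x);
a = 1/20; b = 1/4; deg = 20; ndigits = 40;

(* map [a, b] onto [-1, 1] and sample the function at high precision *)
toUnit[x_] := (2 x - (a + b))/(b - a);
pts = Table[{toUnit[x], g[x]},
   {x, N[a, ndigits], N[b, ndigits], N[(b - a)/200, ndigits]}];

(* least-squares fit of a degree-deg Chebyshev series in the rescaled variable t;
   LinearModelFit supplies the constant (ChebyshevT[0, t]) term automatically *)
cheb = LinearModelFit[pts, Table[ChebyshevT[n, t], {n, 1, deg}], t];

(* the fitted series as a function of the original variable x *)
approx[x_] = Normal[cheb] /. t -> toUnit[x];

The Chebyshev coefficients themselves are available from cheb["BestFitParameters"], and they transfer directly to a Chebyshev evaluation routine in another language.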

POSTED BY: John Doty

Thanks, it seems that this method indeed works much faster, and from my preliminary calculations I see that it may produce satisfactory results for my purposes. However, it is well known that least-squares fits yield larger residuals at the ends of the interval, whereas minimax gives uniform residuals, so a least-squares fit may not be optimal. In any case, I am experiencing strange behavior of LinearModelFit. Although I specify the polynomial degree, the polynomials obtained are not always of that degree; the actual degree is often smaller. Is this normal? Why does this happen? Another problem is that in some cases I obtain error messages saying that some tensors have incompatible shapes, and the algorithm does not output a polynomial. For the attached code this happens with approx6, approx7 and some further ones. What am I doing wrong? Leslaw

Attachments:
POSTED BY: Leslaw Bieniasz

For your function F[x], try using LinearModelFit on a set of points sampled from the function. Note that F[x] runs much faster with high-precision Real input values than with exact fractional values.

ndigits = 40; (* working precision of the sampled function values, as in the original post *)

pnts = Table[{x, F[x]}, {x, N[5/100, ndigits], N[1/4, ndigits], N[1/400, ndigits]}];

mons = NestList[(# * x) &, x, 20]; (* monomials x, x^2, ..., x^21 for the polynomial fit *)

lm = LinearModelFit[pnts, mons, x];

ListPlot[lm["FitResiduals"], Filling -> Axis, PlotRange -> All]

[Plot of the fit residuals]
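For instance, one quick way to check whether a target relative accuracy has been reached (reusing pnts and lm from above):

(* maximum relative error of the fit over the sample points *)
relErrs = Abs[(lm[#[[1]]] - #[[2]])/#[[2]]] & /@ pnts;
Max[relErrs]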

You can increase ndigits and the number of terms in the polynomial as needed to reach the precision you need. I hope that you find this helpful.

POSTED BY: John McGee
