Hi,
How does Mathematica compute the Standard Errors for parameters?
To give an example, let's take the monthly simple returns for IBM and the S&P 500 and convert them to continuously compounded (log) returns:
data = Import["http://faculty.chicagobooth.edu/ruey.tsay/teaching/fts3/m-ibmsp2608.txt", "Table"];
{IBM, SP500} = Log[data[[2 ;;, 2 ;;]] + 1]\[Transpose];
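(The file holds simple returns R, so Log[1 + R] gives the corresponding log return. For a single value, a 5% simple return becomes:

Log[1 + 0.05]
Out[]= 0.0487902
)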
I want to regress the current IBM return on the current S&P 500 return and the one-period lagged returns of both assets. So, I define the dependent variable Y and the design matrix X as follows:
Y = Drop[IBM, 1];
X = {ConstantArray[1, Length[Y]], Drop[SP500, 1], Drop[IBM, -1], Drop[SP500, -1]}\[Transpose];
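Written out, the model is

IBM_t = \[Beta]0 + \[Beta]1 SP500_t + \[Beta]2 IBM_(t-1) + \[Beta]3 SP500_(t-1) + \[Epsilon]_t

with the first column of X carrying the constant term.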
The regression coefficients are easy to obtain from the normal equations:
\[Beta] = Inverse[X\[Transpose].X].(X\[Transpose].Y)
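(As a cross-check, the same vector should come out of LeastSquares, which avoids forming the inverse explicitly:

LeastSquares[X, Y]
)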
They agree with the ones obtained from LinearModelFit:
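(* LinearModelFit adds the constant term itself, so pass only the
   non-constant columns of X, with Y appended as the response column *)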
lm = LinearModelFit[Append[X[[All, 2 ;;]]\[Transpose], Y]\[Transpose], {x1, x2, x3}, {x1, x2, x3}]
lm["BestFitParameters"] == \[Beta]
Out[]= True
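(== tolerates tiny differences in the last bits of machine reals; the actual discrepancy can be inspected with

Max@Abs[lm["BestFitParameters"] - \[Beta]]

which should be at machine-precision level.)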
However, my standard errors come out slightly different.
Letting \[Epsilon] denote the residuals, i.e.
\[Epsilon] = Y - X.\[Beta];
the standard errors are the square roots of the diagonal of \[Sigma]^2 Inverse[X\[Transpose].X], where I estimate the residual variance as \[Sigma]^2 = \[Epsilon].\[Epsilon]/(Length[Y] - 1):
\[Epsilon].\[Epsilon]/(Length[Y] - 1) Inverse[X\[Transpose].X] // Diagonal // Sqrt
Out[]= {0.00172569, 0.0308209, 0.0316489, 0.0403108}
LinearModelFit's standard errors, on the other hand, are:
lm["ParameterErrors"]
Out[]= {0.00172916, 0.0308831, 0.0317127, 0.0403921}
I could not find the source of this difference. What is LinearModelFit doing differently?
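One observation that might help localize it: the ratios between the two sets of errors appear constant across all four parameters, so presumably only the scalar variance estimate differs, e.g. its divisor. A quick probe along those lines (trying n - k, with k the number of parameters, is just a guess):

n = Length[Y]; k = Length[\[Beta]];
Table[\[Epsilon].\[Epsilon]/d Inverse[X\[Transpose].X] // Diagonal // Sqrt, {d, {n - 1, n - k}}]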