Ah, I see. Yes, in both cases you get a T-distribution, but if you derive it from the point estimate made with LinearModelFit
(which is a frequentist method), it means something different, which is why the two results aren't identical even though they look very similar. For one, the T-distribution returned by the Bayesian method is influenced by the prior you set (which is quite uninformative by default, but doesn't have to be), while in the frequentist method there is no prior at all (though there are sometimes regularizers, which serve a similar purpose). If memory serves, the frequentist T-distribution is the sampling distribution of the parameter estimates over a long run of (imagined) datasets drawn from the fitted model, while the Bayesian T-distribution describes your knowledge of the model parameters conditional on the one dataset you actually observed.
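To make the numerical similarity concrete: for ordinary linear regression with a completely flat (improper) prior on the coefficients and a 1/σ² prior on the noise variance, the marginal posterior of a coefficient is exactly the same Student-t distribution the frequentist analysis uses, so the two intervals coincide. Here's a quick sketch in Python rather than Mathematica (all variable names are mine, and this is the textbook conjugate result, not what either Wolfram function does internally):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30
x = np.linspace(0, 10, n)
y = 1.5 + 0.8 * x + rng.normal(0, 1.0, n)

# Ordinary least squares fit of y = a + b*x
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
dof = n - X.shape[1]
s2 = resid @ resid / dof                 # unbiased residual variance
cov = s2 * np.linalg.inv(X.T @ X)        # estimated parameter covariance
se_slope = np.sqrt(cov[1, 1])

# Frequentist: (b_hat - b)/se follows Student t with n-2 dof,
# giving the usual 95% confidence interval for the slope
tcrit = stats.t.ppf(0.975, dof)
ci = (beta_hat[1] - tcrit * se_slope, beta_hat[1] + tcrit * se_slope)

# Bayesian with a flat prior on (a, b) and p(sigma^2) ~ 1/sigma^2:
# the marginal posterior of the slope is the *same* t-distribution,
# so the 95% credible interval coincides numerically
post = stats.t(df=dof, loc=beta_hat[1], scale=se_slope)
cred = post.interval(0.95)
```

The two intervals agree to floating-point precision here; an informative prior would pull the Bayesian interval toward the prior mean and away from this coincidence.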
A more important difference is how predictions are generated from the fit. You may note that lmf["SinglePredictionBands"]
and lmf["MeanPredictionBands"]
differ from the Bayesian bands calculated from model["Posterior", "PredictiveDistribution"]
and model["Posterior", "UnderlyingValueDistribution"]
because the Bayesian distributions yield credible intervals, not confidence intervals. Concretely, the Bayesian bands are typically narrower because the information in the data is used more efficiently (that's a very broad simplification, of course).
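For reference, here is roughly what the two frequentist band types compute at a single point, again as a Python sketch with made-up names (this mirrors the standard formulas, not Mathematica's internals): the mean band carries only the uncertainty of the fitted line, while the single-prediction band also adds the noise of one new observation, which is the same split the Bayesian "UnderlyingValueDistribution" versus "PredictiveDistribution" makes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30
x = np.linspace(0, 10, n)
y = 1.5 + 0.8 * x + rng.normal(0, 1.0, n)

X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
dof = n - X.shape[1]
s2 = resid @ resid / dof
XtX_inv = np.linalg.inv(X.T @ X)
tcrit = stats.t.ppf(0.975, dof)

x0 = np.array([1.0, 5.0])                # design row at the point x = 5
fit = x0 @ beta_hat

# Mean prediction band: uncertainty of the fitted mean only
se_mean = np.sqrt(s2 * x0 @ XtX_inv @ x0)
mean_band = (fit - tcrit * se_mean, fit + tcrit * se_mean)

# Single prediction band: adds the variance of one new observation
se_single = np.sqrt(s2 * (1 + x0 @ XtX_inv @ x0))
single_band = (fit - tcrit * se_single, fit + tcrit * se_single)
```

The single-prediction band is always the wider of the two, since it contains the mean-band variance plus the full observation noise.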
I feel this may not do full justice to the differences in the results, and in the end there's no denying that the formulas end up looking quite similar for linear models like these. The interpretation of the results is different, though, with the Bayesian results usually being closer to how people intuitively interpret statistical results. If you're interested, it would probably be more productive to have a direct chat about this sometime rather than go through the whole frequentist-vs-Bayesian discussion here.