The leave-one-out cross-validation statistic is given by $$ \text{CV} = \frac{1}{N} \sum_{i=1}^{N} e_{[i]}^2, $$ where $e_{[i]} = y_i - \hat{y}_{[i]}$, the observations are given by $y_1,\dots,y_N$, and $\hat{y}_{[i]}$ is the predicted value obtained when the model is estimated with the $i$th case deleted. This is also sometimes known as the PRESS (Prediction Residual Sum of Squares) statistic. It turns out that for linear models, we do not actually have to estimate the model $N$ times, once for each omitted …
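As a quick illustration, here is a minimal NumPy sketch of the CV statistic on invented toy data. The first computation follows the definition literally, refitting the model $N$ times; the second uses the standard hat-matrix identity $e_{[i]} = e_i / (1 - h_{ii})$, where $h_{ii}$ is the $i$th diagonal element of $\mathbf{H} = \mathbf{X}(\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top$, which is the usual basis for avoiding the $N$ refits in linear models. The data, sample size, and variable names are assumptions for the example, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression data (illustrative only): intercept + one predictor.
N = 50
X = np.column_stack([np.ones(N), rng.normal(size=N)])
y = X @ np.array([2.0, -1.5]) + rng.normal(scale=0.5, size=N)

# Naive LOOCV: estimate the model N times, once per omitted case.
cv_naive = 0.0
for i in range(N):
    mask = np.arange(N) != i
    beta_i, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    e_loo = y[i] - X[i] @ beta_i          # e_[i] = y_i - yhat_[i]
    cv_naive += e_loo**2
cv_naive /= N

# Shortcut: one fit, then scale the ordinary residuals by (1 - h_ii),
# using the standard identity e_[i] = e_i / (1 - h_ii) for linear models.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta                                           # ordinary residuals
h = np.einsum('ij,ij->i', X @ np.linalg.inv(X.T @ X), X)   # diagonal of H
cv_fast = np.mean((e / (1 - h))**2)

print(cv_naive, cv_fast)  # the two agree up to floating-point error
```

Running the sketch shows the two quantities match, which is the point of the shortcut: a single least-squares fit yields the same CV value as $N$ separate refits.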