Final answer:
The statement is false. Cross-validation provides a direct estimate of the test error, which makes it advantageous over adjusted R-squared when selecting a model.
Step-by-step explanation:
The statement is false. Cross-validation is advantageous over adjusted R-squared for deciding which model to select because it provides a direct estimate of the test error.
Adjusted R-squared measures the proportion of variance explained by the model, penalized for the number of predictors, but it is computed entirely on the training data and therefore does not directly estimate the test error.
Cross-validation, on the other hand, splits the data into several folds, repeatedly trains the model on all but one fold, and evaluates it on the held-out fold. Averaging the held-out errors gives a direct estimate of the test error, which is a more reliable measure of model performance for comparing candidate models.
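A minimal sketch of this idea, assuming scikit-learn and a synthetic dataset (the feature subsets compared here are illustrative, not from the original question):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic regression data for illustration only.
X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)

# Two hypothetical candidate models: all 10 features vs. only the first 3.
candidates = {"all features": X, "first 3 features": X[:, :3]}

for name, features in candidates.items():
    # 5-fold CV: train on 4 folds, score on the held-out fold,
    # then average across folds to estimate the test MSE.
    scores = cross_val_score(LinearRegression(), features, y,
                             cv=5, scoring="neg_mean_squared_error")
    print(f"{name}: estimated test MSE = {-scores.mean():.2f}")
```

The model with the lower cross-validated MSE would be preferred, since that number approximates performance on unseen data rather than fit to the training sample.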