On the use of cross-validation for the calibration of the adaptive lasso
Authors: Nadim Ballout, Lola Etievant, Vivian Viallon
Affiliation: 1. Univ Lyon, Univ Eiffel, IFSTTAR, Univ Lyon 1, UMRESTTE, Bron, France; 2. Univ Lyon, Univ Eiffel, IFSTTAR, Univ Lyon 1, UMRESTTE, Bron, France, and Institut Camille Jordan, Université Claude Bernard Lyon 1, Lyon, France; 3. Nutrition and Metabolism Branch, International Agency for Research on Cancer (IARC-WHO), Lyon, France
Abstract: Cross-validation is the standard method for hyperparameter tuning, or calibration, of machine learning algorithms. The adaptive lasso is a popular class of penalized approaches based on weighted L1-norm penalties, with weights derived from an initial estimate of the model parameter. Although it violates the paramount principle of cross-validation, according to which no information from the hold-out test set should be used when constructing the model on the training set, a "naive" cross-validation scheme is often implemented for the calibration of the adaptive lasso. The unsuitability of this naive scheme has not been well documented in the literature. In this work, we recall why the naive scheme is theoretically unsuitable and how proper cross-validation should be implemented in this particular context. Using both synthetic and real-world examples and considering several versions of the adaptive lasso, we illustrate the flaws of the naive scheme in practice. In particular, we show that it can lead to the selection of adaptive lasso estimates that perform substantially worse than those selected via a proper scheme, in terms of both support recovery and prediction error. In other words, our results show that the theoretical unsuitability of the naive scheme translates into suboptimality in practice, and they call for abandoning it.
Keywords: adaptive lasso; calibration; cross-validation; one-step lasso; tuning parameter
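
To make the distinction concrete, the following is a minimal sketch (in Python, using scikit-learn) of the two calibration schemes contrasted in the abstract. It is not the authors' implementation: the ridge initial estimator, the weight exponent gamma = 1, the helper names (adaptive_lasso, weights_from_initial, cv_error), and the candidate penalty grid are all illustrative assumptions. The only difference between the two schemes is where the adaptive weights are computed: once on the full data (naive) or inside every training fold (proper).

# Sketch only; initial estimator, gamma, and helper names are assumptions.
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import KFold

def adaptive_lasso(X, y, lam, weights):
    """Fit the adaptive lasso via column rescaling: beta_j = theta_j / w_j."""
    X_tilde = X / weights                           # divide each column j by w_j
    theta = Lasso(alpha=lam, max_iter=10000).fit(X_tilde, y).coef_
    return theta / weights

def weights_from_initial(X, y, gamma=1.0, eps=1e-8):
    """Weights w_j = 1 / (|beta_init_j| + eps)^gamma from a ridge initial estimate."""
    beta_init = Ridge(alpha=1.0).fit(X, y).coef_
    return 1.0 / (np.abs(beta_init) + eps) ** gamma

def cv_error(X, y, lambdas, scheme="proper", n_splits=5, seed=0):
    """Fold-averaged test mean squared error for each candidate penalty lambda."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    if scheme == "naive":
        w_full = weights_from_initial(X, y)         # uses the full data: leaks hold-out information
    errors = np.zeros(len(lambdas))
    for train, test in kf.split(X):
        # proper scheme: recompute the weights on the training fold only
        w = w_full if scheme == "naive" else weights_from_initial(X[train], y[train])
        for i, lam in enumerate(lambdas):
            beta = adaptive_lasso(X[train], y[train], lam, w)
            errors[i] += np.mean((y[test] - X[test] @ beta) ** 2)
    return errors / n_splits

Under the proper scheme, no quantity derived from the hold-out fold enters the construction of the model fitted on the training fold, which is precisely the principle the naive scheme violates; the selected penalty is the lambda minimizing the fold-averaged error returned by cv_error under each scheme.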