Sparse model selection is ubiquitous, from linear regression to graphical
models, where regularization paths, i.e., families of estimators obtained as
the regularization parameter varies, are computed when the regularization
parameter is unknown or chosen data-adaptively. Traditional computational
methods rely on solving a set of optimization problems with the regularization
parameter fixed on a grid, which can be inefficient. In this paper, we
introduce a simple iterative regularization path, which follows the dynamics of
a sparse Mirror Descent algorithm or a generalization of Linearized Bregman
Iterations with a nonlinear loss. Its performance is competitive with
\texttt{glmnet}, with a further bias reduction. A path consistency theory is
presented: under the Restricted Strong Convexity (RSC) and the Irrepresentable
Condition (IRR), the path first evolves in a subspace with no false positives
and reaches an estimator that is either sign-consistent or of minimax optimal
$\ell_2$ error rate. Early stopping regularization is required to prevent
overfitting. Application examples are given for sparse logistic regression and
Ising models on NIPS coauthorship data.
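
For concreteness, here is a minimal Python sketch of a Linearized Bregman Iteration path with a logistic loss, written in the standard LBI form (a dual gradient step followed by soft-thresholding). It is an illustration under stated assumptions, not the paper's exact algorithm: the parameters kappa, alpha, and n_iter, the step-size heuristic, the synthetic data, and the helper names are all hypothetical choices.

```python
import numpy as np

def soft_threshold(z, thresh=1.0):
    # Component-wise soft-thresholding (shrinkage) operator.
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

def logistic_grad(beta, X, y):
    # Gradient of the average logistic loss; labels y are assumed in {0, 1}.
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return X.T @ (p - y) / len(y)

def lbi_path(X, y, kappa=10.0, alpha=None, n_iter=500):
    # Standard Linearized Bregman Iteration with a nonlinear (logistic) loss:
    #   z_{k+1}    = z_k - alpha * grad_loss(beta_k)
    #   beta_{k+1} = kappa * soft_threshold(z_{k+1}, 1)
    # Each iterate is one point on the regularization path, so model
    # selection amounts to choosing an early stopping time.
    n, p = X.shape
    if alpha is None:
        # Heuristic step size for stability (not a prescription from the paper).
        alpha = n / (kappa * np.linalg.norm(X, 2) ** 2)
    z = np.zeros(p)
    beta = np.zeros(p)
    path = []
    for _ in range(n_iter):
        z = z - alpha * logistic_grad(beta, X, y)   # dual / mirror update
        beta = kappa * soft_threshold(z, 1.0)       # sparse primal update
        path.append(beta.copy())
    return np.array(path)

# Illustrative run on synthetic data; an early stopping time would normally
# be chosen along the path, e.g. by cross-validated deviance.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
beta_true = np.zeros(50)
beta_true[:5] = 2.0
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)
path = lbi_path(X, y)
print("nonzeros at final iterate:", np.count_nonzero(path[-1]))
```

In this reading, the whole iteration sequence plays the role that a grid of regularization parameters plays for \texttt{glmnet}, and early stopping replaces the choice of a penalty level.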
