
Constrained minimum criterion


In statistics, the Constrained Minimum Criterion (CMC) is a criterion for selecting regression models, founded on the classical theory of likelihood-based inference. It is a frequentist alternative to the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) with certain advantages.

Geometric motivation

For a full regression model with ${\displaystyle p}$ predictor variables and an intercept, the unknown vector of regression parameters ${\displaystyle {\boldsymbol {\beta }}^{t}}$ is a ${\displaystyle (p+1)}$-vector. Elements of ${\displaystyle {\boldsymbol {\beta }}^{t}}$ corresponding to active variables are non-zero, and elements corresponding to inactive variables are all zero. The likelihood ratio confidence region for ${\displaystyle {\boldsymbol {\beta }}^{t}}$ is centred on its maximum likelihood estimator ${\displaystyle {\hat {\boldsymbol {\beta }}}}$. As the sample size ${\displaystyle n}$ goes to infinity, the confidence region shrinks in size and degenerates into ${\displaystyle {\hat {\boldsymbol {\beta }}}}$, and ${\displaystyle {\hat {\boldsymbol {\beta }}}}$ converges to ${\displaystyle {\boldsymbol {\beta }}^{t}}$. It follows that the whole confidence region converges to ${\displaystyle {\boldsymbol {\beta }}^{t}}$, so for sufficiently large ${\displaystyle n}$, elements of vectors in the confidence region corresponding to active variables are all non-zero. This implies that when ${\displaystyle {\boldsymbol {\beta }}^{t}}$ is captured by the confidence region, it is a vector in the region having the most zeros in its elements. Because of this, the CMC chooses from the confidence region a vector with the most zeros in its elements as an estimate of ${\displaystyle {\boldsymbol {\beta }}^{t}}$, thereby selecting the model defined by variables corresponding to non-zero elements of the chosen vector.

Definition

Let ${\displaystyle {\cal {M}}=\{{M}_{j}\}_{j=1}^{2^{p}}}$ be the collection of the ${\displaystyle 2^{p}}$ subsets of the ${\displaystyle p}$ variables, where each ${\displaystyle M_{j}}$ represents a subset. Denote by ${\displaystyle {\hat {\boldsymbol {\beta }}}_{j}}$ the maximum likelihood estimator for the vector of regression parameters of the reduced model defined by ${\displaystyle M_{j}}$. Augment ${\displaystyle {\hat {\boldsymbol {\beta }}}_{j}}$ to be of dimension ${\displaystyle (p+1)}$ by adding zero elements representing the variables not in ${\displaystyle M_{j}}$. For a fixed ${\displaystyle \alpha \in (0,1)}$, denote by ${\displaystyle {\cal {C}}_{1-\alpha }}$ the ${\displaystyle 100(1-\alpha )\%}$ likelihood ratio confidence region for ${\displaystyle {\boldsymbol {\beta }}^{t}}$, which is a region in the ${\displaystyle (p+1)}$-dimensional space centred on ${\displaystyle {\hat {\boldsymbol {\beta }}}}$. The CMC chooses the model represented by the solution vector of the following constrained minimization problem,

${\displaystyle \min _{M_{j}\in {\cal {M}}}\|{\hat {\boldsymbol {\beta }}}_{j}\|_{0}{\text{ subject to }}{\hat {\boldsymbol {\beta }}}_{j}\in {\cal {C}}_{1-\alpha },}$

where ${\displaystyle \|\cdot \|_{0}}$ denotes the ${\displaystyle L_{0}}$ norm. The solution vector is called the CMC solution, which is a sparse estimator of ${\displaystyle {\boldsymbol {\beta }}^{t}}$. Its corresponding model is called the CMC selection. When there are two or more solution vectors to the minimization problem, the one with the highest likelihood is chosen to be the CMC solution.[1]
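For Gaussian linear models, the definition can be illustrated by a brute-force search over subsets: scan model sizes from smallest to largest and return the first subset whose augmented MLE falls inside the likelihood-ratio region, breaking ties by likelihood. The sketch below is illustrative rather than code from the cited papers; it assumes ${\displaystyle {\cal {C}}_{1-\alpha }}$ is calibrated by a chi-square quantile with ${\displaystyle p+1}$ degrees of freedom, and the function names are made up.

```python
import itertools
import numpy as np
from scipy.stats import chi2

def gaussian_loglik(y, X):
    """Profile log-likelihood of a Gaussian linear model (up to an additive
    constant), computed from the OLS residual sum of squares."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return -0.5 * n * np.log(rss / n)

def cmc_select(X, y, alpha=0.5):
    """Return the smallest subset of columns of X whose fitted model lies in
    the (1 - alpha) likelihood-ratio confidence region of the full model;
    ties at the minimum size are broken by the higher likelihood."""
    n, p = X.shape
    X1 = np.column_stack([np.ones(n), X])         # intercept always included
    ll_full = gaussian_loglik(y, X1)
    cutoff = chi2.ppf(1 - alpha, df=p + 1)        # assumed chi-square calibration
    for k in range(p + 1):                        # smallest model size first
        feasible = []
        for S in itertools.combinations(range(p), k):
            cols = [0] + [j + 1 for j in S]       # intercept + chosen variables
            ll_S = gaussian_loglik(y, X1[:, cols])
            if 2.0 * (ll_full - ll_S) <= cutoff:  # MLE inside the region?
                feasible.append((ll_S, S))
        if feasible:
            return max(feasible)[1]               # highest likelihood among ties

# Toy data: only the first of four predictors is active.
rng = np.random.default_rng(1)
n = 200
X = rng.standard_normal((n, 4))
y = 1.0 + 5.0 * X[:, 0] + 0.5 * rng.standard_normal(n)
sel = cmc_select(X, y, alpha=0.5)
```

With a strong signal and ${\displaystyle n=200}$, the selected subset contains the active variable and, with high probability, nothing else.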

Asymptotic properties

Let ${\displaystyle {\hat {\boldsymbol {\beta }}}_{\alpha }}$ be the CMC solution and ${\displaystyle {\hat {M}}_{\alpha }}$ be the corresponding CMC selection. Under regularity conditions for the asymptotic normality of the maximum likelihood estimator ${\displaystyle {\hat {\boldsymbol {\beta }}}}$, (${\displaystyle i}$) the CMC solution is consistent in that

${\displaystyle {\hat {\boldsymbol {\beta }}}_{\alpha }{\stackrel {p}{\longrightarrow }}{\boldsymbol {\beta }}^{t}}$

as ${\displaystyle n\rightarrow \infty }$, and (${\displaystyle ii}$) the probability that ${\displaystyle {\hat {M}}_{\alpha }}$ is the true model has an asymptotic lower bound

${\displaystyle \lim _{n\rightarrow +\infty }P({\hat {M}}_{\alpha }=M_{j}^{t})\geq 1-\alpha ,}$

where ${\displaystyle M_{j}^{t}}$ denotes the unknown true model containing only and all active variables.

Tuning parameter

The tuning parameter ${\displaystyle \alpha }$ controls the balance between the false active rate and the false inactive rate of the selected model, which is also the balance between the fit and the sparsity of the selected model. When the sample size ${\displaystyle n}$ is large, the asymptotic lower bound in (${\displaystyle ii}$) shows that setting ${\displaystyle \alpha }$ to a small value will lead to a high probability that the CMC selection is the true model. When ${\displaystyle n}$ is not large, a small ${\displaystyle \alpha }$ will lead to a high false inactive rate, so a larger value should be used. The recommended default value is ${\displaystyle \alpha =0.5}$. At this default value, the CMC is often more accurate than the AIC and BIC in terms of both the false active rate and the false inactive rate.

The tuning parameter ${\displaystyle \alpha }$ makes it easy to adapt the CMC to special situations such as when ${\displaystyle n}$ is small. The AIC and BIC both require special adjustments to their penalty terms for small ${\displaystyle n}$ situations. The CMC can handle such situations with a simple change of the ${\displaystyle \alpha }$ level. In asymptotic properties (${\displaystyle i}$) and (${\displaystyle ii}$) above, the ${\displaystyle \alpha }$ level is fixed. Stronger results may be obtained by allowing ${\displaystyle \alpha }$ to vary with ${\displaystyle n}$. For selecting Gaussian linear models, one may let ${\displaystyle \alpha }$ go to zero at a certain speed depending on ${\displaystyle n}$ as ${\displaystyle n}$ goes to infinity so that the CMC selection is consistent;[2] that is, one may find a sequence of tuning parameter values ${\displaystyle \alpha _{n}\rightarrow 0}$ such that

${\displaystyle \lim _{n\rightarrow +\infty }P({\hat {M}}_{\alpha _{n}}=M_{j}^{t})=1.}$

Computation

For best subset selection, the AIC and BIC require the computation of the maximum likelihood of all ${\displaystyle 2^{p}}$ models. The CMC may require far fewer. Denote by ${\displaystyle M_{-i}}$ the model containing all variables except the ${\displaystyle i}$th variable ${\displaystyle \mathbf {x} _{i}}$. Denote by ${\displaystyle {\hat {\boldsymbol {\beta }}}_{-i}}$ the maximum likelihood estimator and by ${\displaystyle \lambda ({\hat {\boldsymbol {\beta }}}_{-i})}$ the maximum log-likelihood ratio of this model. In some cases, the value of ${\displaystyle \lambda ({\hat {\boldsymbol {\beta }}}_{-i})}$ alone is sufficient to determine whether ${\displaystyle \mathbf {x} _{i}}$ will be selected by the CMC: every model excluding ${\displaystyle \mathbf {x} _{i}}$ is nested within ${\displaystyle M_{-i}}$ and thus fits no better, so if ${\displaystyle \lambda ({\hat {\boldsymbol {\beta }}}_{-i})}$ exceeds the threshold defining ${\displaystyle {\cal {C}}_{1-\alpha }}$, no model excluding ${\displaystyle \mathbf {x} _{i}}$ can lie in the confidence region and ${\displaystyle \mathbf {x} _{i}}$ must be selected. One can first compute ${\displaystyle \lambda ({\hat {\boldsymbol {\beta }}}_{-i})}$ and use it to determine if ${\displaystyle \mathbf {x} _{i}}$ will be selected for ${\displaystyle i=1,2,\dots ,p}$. Suppose this has identified ${\displaystyle p'}$ variables that will be selected. Then, one only needs to select from the remaining ${\displaystyle p-p'}$ variables. The total number of models that need to be computed by the CMC is thus ${\displaystyle p+2^{p-p'}}$, which could be substantially smaller than the ${\displaystyle 2^{p}}$ required by the AIC and BIC.
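The screening step can be sketched as follows for Gaussian linear models. This is a hypothetical helper, not code from the cited papers, and it uses the same assumed ${\displaystyle \chi _{p+1}^{2}}$ calibration of the confidence region as before: since dropping ${\displaystyle \mathbf {x} _{i}}$ together with anything else fits no better than dropping ${\displaystyle \mathbf {x} _{i}}$ alone, a leave-one-out log-likelihood ratio above the cutoff forces ${\displaystyle \mathbf {x} _{i}}$ into the selection.

```python
import numpy as np
from scipy.stats import chi2

def gaussian_loglik(y, X):
    """Profile log-likelihood of a Gaussian linear model (up to an additive
    constant), computed from the OLS residual sum of squares."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return -0.5 * n * np.log(rss / n)

def cmc_screen(X, y, alpha=0.5):
    """Find variables that every model in the confidence region must contain.
    If dropping x_i alone already pushes the best fit outside the region,
    then so does every submodel excluding x_i (nested models fit worse)."""
    n, p = X.shape
    X1 = np.column_stack([np.ones(n), X])       # intercept always included
    ll_full = gaussian_loglik(y, X1)
    cutoff = chi2.ppf(1 - alpha, df=p + 1)      # assumed chi-square calibration
    forced = []
    for i in range(p):
        keep = [c for c in range(p + 1) if c != i + 1]   # drop x_i only
        lam = 2.0 * (ll_full - gaussian_loglik(y, X1[:, keep]))
        if lam > cutoff:
            forced.append(i)                    # x_i is in every feasible model
    return forced

# Toy data: only the first of four predictors is active.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4))
y = 1.0 + 5.0 * X[:, 0] + 0.5 * rng.standard_normal(200)
forced = cmc_screen(X, y, alpha=0.5)
```

Here `forced` should contain the active variable 0, so the exhaustive search only needs to run over subsets of the remaining variables.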

Remarks

Comprehensive discussions of model selection philosophies and criteria can be found in the literature.[3][4][5] In other model selection strategies such as the AIC and BIC, the sparsity of the selected model comes as a by-product of the model selection process. By directly minimizing the size of the model subject to a lower bound constraint on the likelihood ratio, the CMC is the first model selection method to explicitly pursue the sparsity of the selected model.

References

1. Tsao, Min (2023). "Regression model selection via log-likelihood ratio and constrained minimum criterion". Canadian Journal of Statistics. arXiv:2107.08529. doi:10.1002/cjs.11756.
2. Tsao, Min (2021). "A constrained minimum method for model selection". Stat. 10. doi:10.1002/sta4.387.
3. Ding, Jie; Tarokh, Vahid; Yang, Yuhong (2018). "Model Selection Techniques: An Overview". IEEE Signal Processing Magazine. 35 (6): 16–34. arXiv:1810.09583. doi:10.1109/MSP.2018.2867638.
4. Kadane, J.B.; Lazar, N.A. (2004). "Methods and criteria for model selection". Journal of the American Statistical Association. 99 (465): 279–290. doi:10.1198/016214504000000269.
5. Miller, Alan (2019). Subset selection in regression (2nd ed.). Chapman & Hall. ISBN 9780367396220.