{"id":702,"date":"2018-01-27T14:36:31","date_gmt":"2018-01-27T14:36:31","guid":{"rendered":"http:\/\/jsr.isrt.ac.bd\/?post_type=article&p=702"},"modified":"2018-01-27T14:36:46","modified_gmt":"2018-01-27T14:36:46","slug":"shrinkage-selection-anova-model","status":"publish","type":"article","link":"http:\/\/jsr.isrt.ac.bd\/article\/shrinkage-selection-anova-model\/","title":{"rendered":"On shrinkage and selection: ANOVA model"},"content":{"rendered":"
This paper considers the estimation of the parameters of an ANOVA model when sparsity is suspected. Accordingly, we consider the least squares estimator (LSE), restricted LSE, preliminary test and Stein-type estimators, together with three penalty estimators, namely the ridge estimator, subset selection rules (hard threshold estimator) and the LASSO (soft threshold estimator). We compare and contrast the L2-risk of all the estimators with the lower bound of the L2-risk of the LASSO in a family of diagonal projection schemes, which is also the lower bound of the exact L2-risk of the LASSO. This comparison shows that none of the LASSO, LSE, preliminary test, and Stein-type estimators uniformly outperforms the others. However, when the model is sparse, the LASSO outperforms all estimators except the ridge estimator, since the LASSO and ridge are L2-risk equivalent under sparsity. We also find that the LASSO and the restricted LSE are L2-risk equivalent, and both outperform all estimators (except ridge) depending on the dimension of sparsity. Finally, the ridge estimator outperforms all estimators uniformly. Our findings are based on the L2-risk of the estimators and the lower bound of the risk of the LASSO, together with tables and graphical displays of efficiency, rather than on simulation.
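
For reference, the threshold and shrinkage estimators named above have standard closed forms in a canonical (orthonormal or diagonal) coordinate model; the sketch below records these textbook definitions together with the L2-risk criterion used for the comparison. The coordinate model $z_j = \theta_j + e_j$, the threshold level $\lambda$ and the ridge constant $k$ are illustrative assumptions and are not taken from the paper itself.

% Standard canonical-form definitions (illustrative; notation assumed, not the paper's).
\[
\hat\theta_j^{\mathrm{soft}}(\lambda) = \operatorname{sign}(z_j)\,(|z_j|-\lambda)_{+} \quad \text{(LASSO, soft threshold)},
\qquad
\hat\theta_j^{\mathrm{hard}}(\lambda) = z_j\,\mathbf{1}\{|z_j|>\lambda\} \quad \text{(subset selection, hard threshold)},
\]
\[
\hat\theta_j^{\mathrm{ridge}}(k) = \frac{z_j}{1+k} \quad \text{(ridge)},
\qquad
R_2(\hat\theta,\theta) = \mathbb{E}\,\lVert \hat\theta - \theta \rVert_2^{2} \quad \text{(L2-risk).}
\]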