Bayesian Penalized Regression (and a little MCMC)
Galin Jones, University of Minnesota
I will consider ordinary least squares, lasso, bridge, and ridge regression methods under a unified framework. The particular method is determined by the form of the penalty term, which is typically chosen by cross validation. The goal is to introduce a fully Bayesian approach that allows the penalty to be selected through posterior inference, if desired, and to discuss a model-averaging approach for eliminating the nuisance penalty parameters. Sufficient conditions for the posterior to concentrate near the true regression coefficients as the dimension grows with the sample size will be discussed.
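The unified framework itself is developed in the talk; as a minimal sketch of the idea, a bridge-type penalty lam * sum(|beta_j|^q) nests the methods listed above, with ridge (q = 2), lasso (q = 1), and OLS (lam = 0) as special cases. The function name and toy data below are illustrative, not from the talk.

```python
import numpy as np

def penalized_loss(beta, X, y, lam, q):
    """Bridge-type penalized least-squares objective (illustrative).

    The penalty lam * sum(|beta_j|**q) recovers ridge (q=2),
    lasso (q=1), and ordinary least squares (lam=0).
    """
    residual = y - X @ beta
    return 0.5 * np.sum(residual**2) + lam * np.sum(np.abs(beta) ** q)

# Toy example: score the same coefficient vector under each penalty.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
beta_true = np.array([1.0, 0.0, -2.0])
y = X @ beta_true  # noiseless, so the residual term is zero
for name, lam, q in [("OLS", 0.0, 2), ("ridge", 1.0, 2),
                     ("lasso", 1.0, 1), ("bridge", 1.0, 0.5)]:
    print(name, penalized_loss(beta_true, X, y, lam, q))
```

With a zero residual, the printed values are just the penalty terms, which makes the effect of q on how coefficients are charged easy to see.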
The resulting posterior is analytically intractable and requires a component-wise Markov chain Monte Carlo algorithm. The MCMC estimation problem is highly multivariate, an issue that has been largely ignored in the MCMC literature. A new relative-volume simulation termination rule will be introduced and connected to a new concept of effective sample size, allowing the simulation to be terminated in a principled manner.
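The specific relative-volume rule is introduced in the talk. As a rough illustration of the underlying idea, the sketch below computes a multivariate effective sample size from a determinant (volume) ratio between the sample covariance and a batch-means estimate of the Monte Carlo covariance; the function name, batch-size choice, and toy chain are assumptions for this example.

```python
import numpy as np

def multivariate_ess(chain):
    """Multivariate effective sample size of an (n, p) MCMC sample (sketch).

    Compares the volume of the sample covariance (lam) with that of a
    batch-means estimate of the asymptotic covariance (sigma) via
    ESS = n * (det(lam) / det(sigma)) ** (1 / p). This illustrates the
    volume-ratio idea only; it is not the talk's termination rule.
    """
    n, p = chain.shape
    lam = np.cov(chain, rowvar=False)          # sample covariance
    b = int(np.sqrt(n))                        # batch size ~ sqrt(n) (assumed)
    a = n // b                                 # number of batches
    batch_means = chain[: a * b].reshape(a, b, p).mean(axis=1)
    sigma = b * np.cov(batch_means, rowvar=False)  # batch-means estimate
    ratio = np.linalg.det(lam) / np.linalg.det(sigma)
    return n * ratio ** (1.0 / p)

# One would terminate the simulation once the ESS exceeds a target value.
rng = np.random.default_rng(1)
chain = rng.standard_normal((5000, 3))         # iid toy "chain"
print(multivariate_ess(chain))                 # near n for iid draws
```

For a genuinely correlated chain the batch-means covariance inflates relative to the sample covariance, shrinking the volume ratio and hence the effective sample size.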
Numerical results show that the proposed model and MCMC method tend to select the optimal penalty and perform well in both variable selection and prediction. Both simulated and real data examples will be provided.
Note: Seminars are free and open to the public.