
Title
Bayesian Nonparametric Model Selection and Model Testing
Speaker
George Karabatsos, University of Illinois-Chicago
Abstract
This presentation examines a general Bayesian nonparametric approach to model selection and model testing, which is fully justified from the perspective of Bayesian decision theory, and is useful for evaluating the predictive utility of a set of models {M_d} that are either probabilistic (Bayesian or classical-frequentist), or even deterministic. In this approach, conditional on an observed set of data x^n = {x_1, ..., x_n} that arises as a random sample from some unknown true sampling density f_0, a consistent posterior estimate f_n of the true density f_0 is obtained on the basis of a nonparametric prior specified to give positive support to the entire set of possible sampling densities {f}. Then the "best" model from {M_d} is chosen as the model M_d that predicts a sampling density f_{nd} that is nearest in Kullback-Leibler divergence to the true sampling density f_0 (estimated by f_n). Furthermore, the decision is made to reject any given model M_d when it predicts a sampling density f_{nd} that significantly diverges from the true sampling density f_0 (estimated by f_n), where "significance" is determined by a calibration of the Kullback-Leibler divergence. This presentation also discusses the advantages of the Bayesian nonparametric approach over all other types of model selection approaches, and over any model testing procedure that depends on interpreting a p-value. The Bayesian nonparametric approach is illustrated on real data sets for the comparison and testing of models that are relevant to mathematical psychology.
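The selection step the abstract describes reduces to: estimate f_0 nonparametrically from the data, then rank each candidate model by the Kullback-Leibler divergence of its predicted density f_{nd} from that estimate. The Python sketch below illustrates only this ranking step, under loudly stated simplifications: it is not the speaker's implementation; a Gaussian kernel density estimate stands in for the Bayesian nonparametric posterior estimate f_n; the two candidate models and all names (kl_divergence, models, etc.) are hypothetical; and the talk's calibration-based rejection step is omitted.

    import numpy as np
    from scipy import stats

    def kl_divergence(p_fn, q_fn, grid):
        # Numerical KL(p || q) over an evenly spaced grid of support points.
        dx = grid[1] - grid[0]
        p = np.clip(p_fn(grid), 1e-300, None)
        q = np.clip(q_fn(grid), 1e-300, None)
        p = p / (p.sum() * dx)          # renormalize after clipping
        q = q / (q.sum() * dx)
        return float(np.sum(p * np.log(p / q)) * dx)

    # Observed sample x^n, drawn here from a known density for illustration.
    rng = np.random.default_rng(0)
    x = rng.normal(loc=1.0, scale=2.0, size=500)

    # Stand-in for the consistent nonparametric posterior estimate f_n of f_0
    # (the talk uses a Bayesian nonparametric prior; a KDE is a simplification).
    f_n = stats.gaussian_kde(x)

    # Candidate models {M_d}, each supplying a predicted density f_{nd}
    # (fitted parametric densities here, purely for illustration).
    models = {
        "M_normal":  stats.norm(*stats.norm.fit(x)).pdf,
        "M_laplace": stats.laplace(*stats.laplace.fit(x)).pdf,
    }

    grid = np.linspace(x.min() - 1.0, x.max() + 1.0, 2000)
    divergences = {d: kl_divergence(f_n, f_nd, grid) for d, f_nd in models.items()}
    best = min(divergences, key=divergences.get)
    print(divergences)
    print("Selected model:", best)
    # The testing step (rejecting any M_d whose divergence is "significant"
    # under a calibration of the KL divergence) is not shown here.

Swapping the KDE for a genuine nonparametric posterior mean density would recover the estimator the abstract actually describes; the ranking logic around it would be unchanged.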
Meet the speaker in Room 212 Cockins Hall at 4:30 p.m. Refreshments will be served.