
Seminar: Lo-Bin Chang

Statistics Seminar Series
February 24, 2015
All Day
209 W. Eighteenth Ave. (EA), Room 170

Title

Tracking Cross-Validated Estimates of Prediction Error as Studies Accumulate

Speaker

Lo-Bin Chang, Johns Hopkins University

Abstract

In recent years "reproducibility" has emerged as a key factor in evaluating applications of statistics to the biomedical sciences, for example learning predictors of disease phenotypes from high-throughput "omics" data. In particular, "validation" is undermined when error rates on newly acquired data are sharply higher than those originally reported. More precisely, when data are collected from m "studies" representing possibly different sub-phenotypes, or, more generally, different mixtures of sub-phenotypes, the error rates in cross-study validation (CSV) are observed to be larger than those obtained in ordinary randomized cross-validation (RCV), although the "gap" seems to close as m increases. Whereas these findings are hardly surprising for a heterogeneous underlying population, the discrepancy is nonetheless seen as a barrier to translational research. In this talk, I will provide a statistical formulation in the large-sample limit: studies themselves are modeled as components of a mixture, and all error rates are optimal (Bayes) for a two-class problem. Our results cohere with the trends observed in practice and suggest what is likely to be observed with large samples and consistent density estimators, namely that the CSV error rate exceeds the RCV error rate for any m, that the latter (appropriately averaged) increases with m, and that both converge to the optimal rate for the whole population.
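
The contrast between the two validation schemes described above can be illustrated with a small simulation. The sketch below is purely hypothetical and is not the speaker's model: the Gaussian sub-populations, the study-specific mean shifts, the logistic-regression classifier, and all parameter values are assumptions chosen only to show how CSV (hold out an entire study) and RCV (pool all studies and split at random) are computed and compared.

```python
# Hypothetical sketch: compare cross-study validation (CSV) with ordinary
# randomized cross-validation (RCV) on data drawn from a mixture of
# study-specific sub-populations. All distributions and parameters are
# illustrative assumptions, not taken from the talk.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

def simulate_study(n, shift):
    """Two-class Gaussian data whose feature means carry a study-specific shift."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(loc=shift + y[:, None] * 1.0, scale=1.0, size=(n, 5))
    return X, y

m, n_per_study = 5, 200
studies = [simulate_study(n_per_study, shift=rng.normal(scale=0.5)) for _ in range(m)]

# Cross-study validation: hold out one whole study, train on the remaining m - 1.
csv_errors = []
for k in range(m):
    X_tr = np.vstack([X for i, (X, _) in enumerate(studies) if i != k])
    y_tr = np.concatenate([y for i, (_, y) in enumerate(studies) if i != k])
    X_te, y_te = studies[k]
    clf = LogisticRegression().fit(X_tr, y_tr)
    csv_errors.append(np.mean(clf.predict(X_te) != y_te))

# Randomized cross-validation: pool all studies and split at random into m folds.
X_all = np.vstack([X for X, _ in studies])
y_all = np.concatenate([y for _, y in studies])
rcv_errors = []
for tr_idx, te_idx in KFold(n_splits=m, shuffle=True, random_state=0).split(X_all):
    clf = LogisticRegression().fit(X_all[tr_idx], y_all[tr_idx])
    rcv_errors.append(np.mean(clf.predict(X_all[te_idx]) != y_all[te_idx]))

print(f"CSV error: {np.mean(csv_errors):.3f}   RCV error: {np.mean(rcv_errors):.3f}")
```

Under such a heterogeneous mixture, the held-out study in CSV comes from a sub-population the classifier never saw during training, which is the mechanism behind the "gap" between CSV and RCV error rates discussed in the abstract.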