
Speaker: Eunseop Kim
Title: Regularized exponentially tilted empirical likelihood for Bayesian inference
Abstract: Bayesian inference with empirical likelihood faces a challenge: with finite data, the convex hull constraint restricts the posterior domain to a proper subset of the original parameter space. To address this issue, we propose regularized exponentially tilted empirical likelihood. Our method removes the convex hull constraint through a novel regularization technique that incorporates a continuous exponential family distribution to satisfy a Kullback-Leibler divergence criterion. The regularization arises as a limiting procedure in which pseudo-data are added to the exponentially tilted empirical likelihood formulation in a disciplined way. We show that regularized exponentially tilted empirical likelihood retains the desirable asymptotic properties of (exponentially tilted) empirical likelihood. Simulation studies and data analysis demonstrate that the proposed method provides a suitable pseudo-likelihood for Bayesian inference.
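
For orientation, the following is a minimal Python sketch of the standard exponentially tilted empirical likelihood pseudo-likelihood that the talk's regularized version builds on, written for a scalar mean with estimating function g(x_i, theta) = x_i - theta. It is not the speaker's regularized method; the function name and the scipy-based dual optimization are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def etel_logpseudolik(theta, x):
        # Exponentially tilted empirical likelihood (ETEL) for a scalar mean:
        # g(x_i, theta) = x_i - theta.  The tilting parameter lambda-hat
        # minimizes the convex dual objective (1/n) * sum_i exp(lambda * g_i),
        # and the log pseudo-likelihood is sum_i log w_i(theta), where
        # w_i is proportional to exp(lambda-hat * g_i).
        g = x - theta
        dual = lambda lam: np.mean(np.exp(lam[0] * g))
        lam_hat = minimize(dual, x0=[0.0]).x[0]
        w = np.exp(lam_hat * g)
        w /= w.sum()
        return np.sum(np.log(w))

    # Illustrative usage on simulated data: the pseudo-likelihood is maximized
    # at the sample mean and decreases as theta moves away from it.
    rng = np.random.default_rng(0)
    x = rng.normal(loc=1.0, scale=1.0, size=100)
    print(etel_logpseudolik(1.0, x), etel_logpseudolik(2.0, x))

Note that this plain ETEL is only defined when theta lies inside the convex hull of the estimating-function values, which is exactly the finite-sample restriction the regularization in the talk is designed to remove.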
Speaker: Yoonji Kim
Title: Sequential Bayesian Registration for Functional Data
Abstract: In many modern applications, discretely observed data may be naturally understood as a set of functions. Functional data often exhibit two confounded sources of variability: amplitude (y-axis) and phase (x-axis). The extraction of amplitude and phase, a process known as registration, is essential for exploring the underlying structure of functional data in a variety of areas, from environmental monitoring to medical imaging. Critically, such data are often gathered sequentially, with new functional observations arriving over time. Despite this, most available registration procedures are applicable only to batch learning, leading to inefficient computation. To address these challenges, we introduce a Bayesian framework for sequential registration of functional data, which updates statistical inference as new sets of functions are assimilated. This Bayesian model-based sequential learning approach uses sequential Monte Carlo sampling to recursively update the alignment of observed functions while accounting for the associated uncertainty. As a result, distributed computing, which is generally not an option in batch learning, significantly reduces computational cost. Simulation studies and comparisons to existing batch learning methods show that the proposed approach performs well even when the target posterior distribution has a challenging structure.
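
As background, here is a generic sequential Monte Carlo assimilation step of the kind the abstract refers to, sketched for a scalar parameter rather than the registration model itself. The reweight/resample/jitter structure, the ess_frac threshold, and all names are illustrative assumptions, not the speaker's implementation.

    import numpy as np

    def smc_update(particles, log_weights, new_batch, loglik, rng, ess_frac=0.5):
        # One sequential assimilation step: reweight particles by the
        # likelihood of the newly arrived batch, then resample and jitter
        # when the effective sample size falls below ess_frac * N.
        log_weights = log_weights + np.array([loglik(p, new_batch) for p in particles])
        log_weights -= np.max(log_weights)          # stabilize before exponentiating
        w = np.exp(log_weights); w /= w.sum()
        ess = 1.0 / np.sum(w ** 2)                  # effective sample size
        if ess < ess_frac * len(particles):
            idx = rng.choice(len(particles), size=len(particles), p=w)
            # Multinomial resampling plus a small Gaussian jitter (move step).
            particles = particles[idx] + 0.05 * rng.standard_normal(len(particles))
            log_weights = np.zeros(len(particles))
        else:
            log_weights = np.log(w)
        return particles, log_weights

    # Illustrative usage: sequentially infer a scalar mean as batches arrive,
    # updating the particle approximation instead of refitting from scratch.
    rng = np.random.default_rng(1)
    loglik = lambda mu, y: -0.5 * np.sum((y - mu) ** 2)   # unit-variance Gaussian batch
    particles = rng.normal(0.0, 3.0, size=500)            # draws from a diffuse prior
    log_w = np.zeros(500)
    for t in range(5):
        batch = rng.normal(2.0, 1.0, size=20)             # new observations at time t
        particles, log_w = smc_update(particles, log_w, batch, loglik, rng)
    print(np.average(particles, weights=np.exp(log_w - log_w.max())))

The key design point, as in the abstract, is that each update reuses the existing particle approximation, so per-particle reweighting can be distributed and the full batch never needs to be reprocessed.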