
Dissertation Defense: Rui Zhang

July 10, 2024
9:00AM - 11:00AM
Zoom Link Below

Department of Statistics | stat@osu.edu

Dissertation Defense: Rui Zhang, Statistics Ph.D. Candidate

Title: Likelihood-free Inference via Deep Neural Networks

Abstract: Many application areas rely on models that can be readily simulated but lack a closed-form likelihood, or an accurate likelihood approximation, under arbitrary parameter values. In this "likelihood-free" setting, inference is typically simulation-based and requires some degree of approximation. Recent work uses neural networks, trained on a set of synthetic parameter-data pairs, to reconstruct the mapping from the data space to the parameters; this approach suffers from the curse of dimensionality, resulting in inaccurate estimation as the data size grows.
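
The core idea of reconstruction-map estimation can be sketched with a toy example. Everything below — the Gaussian model, and a linear map fit by least squares standing in for the neural network — is an illustrative assumption, not the dissertation's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: each dataset y is n i.i.d. draws from N(theta, 1).
# A linear map fit by least squares stands in for the neural network that
# learns the reconstruction map from synthetic (theta, y) pairs.
n_train, n_obs = 2000, 50
theta = rng.uniform(-3.0, 3.0, size=n_train)                 # prior draws
y = theta[:, None] + rng.standard_normal((n_train, n_obs))   # simulated data

# Fit the reconstruction map theta_hat = y @ w + b by least squares.
X = np.hstack([y, np.ones((n_train, 1))])
coef, *_ = np.linalg.lstsq(X, theta, rcond=None)

# Apply the learned map to a fresh dataset simulated at theta = 1.5;
# the estimate should land near the true value.
y_new = 1.5 + rng.standard_normal(n_obs)
theta_hat = np.append(y_new, 1.0) @ coef
```

Note that the map's input dimension here grows with the number of observations, which is exactly the source of the curse of dimensionality described above.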

We propose new inferential techniques to overcome these limitations, beginning with a simulation-based dimension-reduced reconstruction map (RM-DR) estimation method. This approach integrates reconstruction map estimation with dimension-reduction techniques grounded in subject-specific knowledge. We examine the properties of reconstruction map estimation with and without dimension reduction, and describe the trade-off between the information loss from data reduction and the approximation error due to the increasing input dimension of the reconstruction function. To further incorporate uncertainty quantification, we introduce kernel-adaptive synthetic posterior estimation (KASPE). This method employs a deep learning framework to learn a closed-form approximation of the exact posterior, combined with a kernel-based adaptive sampling mechanism that generates synthetic training data. We study the convergence properties of the approach and its connections with other methods. Extensive simulation studies show that our methods consistently outperform competing approaches in accuracy and robustness.
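
The dimension-reduction idea behind RM-DR can be illustrated generically (this is not the actual method): compress each simulated dataset to a few subject-informed summaries before fitting the map, so the map's input dimension no longer grows with the data size. The model, summaries, and linear fit below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model: y is n i.i.d. draws from N(mu, sigma^2). Subject
# knowledge suggests two summaries: the sample mean and log sample variance.
def summaries(y):
    return np.column_stack([y.mean(axis=1), np.log(y.var(axis=1))])

n_train, n_obs = 2000, 100
mu = rng.uniform(-2.0, 2.0, n_train)
sigma = rng.uniform(0.5, 2.0, n_train)
y = mu[:, None] + sigma[:, None] * rng.standard_normal((n_train, n_obs))

# Fit the map on 2-dimensional summaries instead of the raw 100-dim data;
# a least-squares linear map again stands in for a neural network.
S = np.hstack([summaries(y), np.ones((n_train, 1))])
W, *_ = np.linalg.lstsq(S, np.column_stack([mu, sigma]), rcond=None)

# The learned map takes 2 inputs regardless of the dataset size n_obs.
y_new = 1.0 + 1.5 * rng.standard_normal((1, n_obs))
est = np.hstack([summaries(y_new), [[1.0]]]) @ W
mu_hat, sigma_hat = est[0]
```

The trade-off mentioned above is visible here: the summaries keep the input dimension fixed as the data grow, at the cost of whatever information the reduction discards.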

Building on these advancements, we extend the RM-DR and KASPE methods by integrating a Transformer encoder into their architectures, resulting in the RM-TE and KASPE-TE frameworks. Through the self-attention mechanism and model layers tailored to parameter inference tasks, these methods capture complex dependencies and handle high-dimensional data by automatically learning and extracting relevant features. Numerical experiments confirm that RM-TE and KASPE-TE significantly improve estimation accuracy over existing methods. The approach also yields automatic, data-driven summary statistics applicable to other likelihood-free methods.
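
The self-attention computation at the heart of a Transformer encoder fits in a few lines of NumPy. The weights and dimensions below are random placeholders — the actual RM-TE/KASPE-TE architectures are not specified here — purely to show how attention pools a dataset into fixed-size, data-driven summary features:

```python
import numpy as np

rng = np.random.default_rng(2)

def self_attention(Y, Wq, Wk, Wv):
    # Scaled dot-product self-attention over the rows (observations) of Y.
    # Each output row is a data-dependent weighted combination of all rows,
    # so pooling the output gives a learned summary of the whole dataset.
    Q, K, V = Y @ Wq, Y @ Wk, Y @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)       # row-wise softmax
    return weights @ V

# Hypothetical setup: 50 observations embedded in d = 8 dimensions.
n_obs, d = 50, 8
Y = rng.standard_normal((n_obs, 1)) @ rng.standard_normal((1, d))
Wq, Wk, Wv = (0.1 * rng.standard_normal((d, d)) for _ in range(3))

# Mean-pool the attended rows into d fixed-size summary features.
summary = self_attention(Y, Wq, Wk, Wv).mean(axis=0)
```

In a trained encoder the projection matrices are learned, so the pooled features act as automatic summary statistics — the property the abstract notes is transferable to other likelihood-free methods.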

Advisors: Oksana Chkrebtii, Dongbin Xiu

Zoom Link: https://osu.zoom.us/j/97699949815?pwd=htn4885YWC5bqOjOpdYU8qC6IevPyb.1