OED (optimal experimental design) acceleration via deep learning

Various real-world scientific applications involve the mathematical modeling of complex uncertain systems with numerous unknown parameters. Accurate parameter estimation is often practically infeasible in such systems, as the available training data may be insufficient and the cost of acquiring additional data may be very high. In such cases, it may be desirable to represent the uncertainty present in the model in a Bayesian paradigm, based on which one may design robust operators that retain good overall performance across all possible models. Furthermore, one may design optimal experiments that can effectively reduce the uncertainty so as to significantly enhance the performance of the robust operators.

While objective-based uncertainty quantification (objective-UQ) based on MOCU (mean objective cost of uncertainty) provides an effective means for quantifying uncertainty in complex systems, the high computational cost of estimating MOCU has been a practical challenge in applying it to real-world scientific/engineering problems.

In our recent work, we proposed a novel deep learning (DL) scheme for objective-UQ via MOCU that significantly accelerates MOCU estimation and, in turn, optimal experimental design (OED).

Qihua Chen, Xuejin Chen, Hyun-Myung Woo, Byung-Jun Yoon, “Neural Message Passing for Objective-Based Uncertainty Quantification and Optimal Experimental Design,” Engineering Applications of Artificial Intelligence, Volume 123, Part A, 106171, 2023, https://doi.org/10.1016/j.engappai.2023.106171

In the above study, we trained a message-passing neural network (MPNN) as a surrogate MOCU estimator, incorporating a novel axiomatic constraint loss that improves the estimation performance, and ultimately, the OED outcomes. Our results show that the proposed scheme can accelerate MOCU-based OED by four to five orders of magnitude, without any visible performance loss.
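
To give a flavor of how a fast surrogate changes the OED loop, here is a minimal Python sketch of MOCU-based experiment selection in which the expensive Monte Carlo MOCU computation is replaced by a generic surrogate callable. All names and interfaces below (estimate_mocu_surrogate, simulate_outcome, update_posterior) are hypothetical placeholders for illustration only; they are not the MPNN architecture or API from the paper.

```python
import numpy as np

# Hypothetical surrogate MOCU estimator. In the paper this role is played by a
# trained message-passing neural network; here it is only a placeholder callable.
def estimate_mocu_surrogate(model_samples):
    """Return a fast (surrogate) MOCU estimate for the given posterior samples."""
    # Placeholder logic: pretend MOCU shrinks as the posterior spread shrinks.
    return float(np.var(model_samples))

def expected_remaining_mocu(experiment, model_samples, simulate_outcome,
                            update_posterior, n_outcomes=20, rng=None):
    """Monte Carlo estimate of the MOCU expected to remain after running `experiment`."""
    rng = np.random.default_rng() if rng is None else rng
    remaining = []
    for _ in range(n_outcomes):
        theta = model_samples[rng.integers(len(model_samples))]   # plausible model
        outcome = simulate_outcome(experiment, theta, rng)        # simulated outcome
        posterior = update_posterior(model_samples, experiment, outcome)
        remaining.append(estimate_mocu_surrogate(posterior))      # fast surrogate call
    return float(np.mean(remaining))

def select_next_experiment(candidates, model_samples, simulate_outcome, update_posterior):
    """Pick the candidate experiment expected to leave the least MOCU behind."""
    scores = {c: expected_remaining_mocu(c, model_samples, simulate_outcome, update_posterior)
              for c in candidates}
    return min(scores, key=scores.get)
```

The selection loop itself is standard; what makes it practical in the paper is that each surrogate call is orders of magnitude cheaper than a direct Monte Carlo MOCU estimate.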

For further details, the paper can be accessed at: [download paper]

Optimal Bayesian transfer learning for enhancing error estimation under data scarcity

Our recent study on Bayesian error estimation via optimal Bayesian transfer learning has been published in Patterns, a premium open access journal published by Cell Press.

Omar Maddouri, Xiaoning Qian, Francis J. Alexander, Edward R. Dougherty, Byung-Jun Yoon, “Robust Importance Sampling for Error Estimation in the Context of Optimal Bayesian Transfer Learning,” Patterns, doi: https://doi.org/10.1016/j.patter.2021.100428.


Efficient active learning for Gaussian process classification

Our NeurIPS 2021 paper entitled “Efficient Active Learning for Gaussian Process Classification by Error Reduction” is now available online at the following link: https://openreview.net/pdf?id=UK15Hj9qX6I

Guang Zhao, Edward Dougherty, Byung-Jun Yoon, Francis Alexander, Xiaoning Qian, “Efficient Active Learning for Gaussian Process Classification by Error Reduction,” Thirty-Fifth Conference on Neural Information Processing Systems (NeurIPS 2021), Dec. 6 – 14, 2021.

In this paper, we investigate active learning for Gaussian Process Classification (GPC) and develop computationally efficient algorithms for EER (expected error reduction)-based active learning with GPC. In particular, we consider EER as the reduction of the Mean Objective Cost of Uncertainty (MOCU), where the learning objective of GPC is to reduce the classification error.
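
To make the EER idea concrete, below is a deliberately naive sketch of EER-based acquisition for a GP classifier using scikit-learn's GaussianProcessClassifier as a stand-in. It retrains the classifier once per candidate point and per hypothetical label, which is exactly the brute-force cost that the algorithms in the paper are designed to avoid; the function names and the reference-set construction are illustrative choices, not the paper's implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def expected_error(gpc, X_ref):
    """Expected 0/1 error of the current classifier over a reference set."""
    proba = gpc.predict_proba(X_ref)          # predictive class probabilities
    return float(np.mean(1.0 - proba.max(axis=1)))

def eer_acquisition(X_lab, y_lab, X_pool, X_ref, kernel=1.0 * RBF(1.0)):
    """Naive EER: score each pool point by the expected error after adding it.

    Assumes X_lab already contains examples from every class.
    """
    base = GaussianProcessClassifier(kernel=kernel).fit(X_lab, y_lab)
    p_pool = base.predict_proba(X_pool)
    classes = base.classes_
    scores = []
    for i, x in enumerate(X_pool):
        # Average the retrained error over the candidate's possible labels,
        # weighted by the current predictive probabilities.
        future_err = 0.0
        for j, y in enumerate(classes):
            X_aug = np.vstack([X_lab, x[None, :]])
            y_aug = np.append(y_lab, y)
            gpc_aug = GaussianProcessClassifier(kernel=kernel).fit(X_aug, y_aug)
            future_err += p_pool[i, j] * expected_error(gpc_aug, X_ref)
        scores.append(future_err)
    return int(np.argmin(scores))              # index of the best point to label next
```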

Our experiments clearly demonstrate the computational efficiency of the proposed approach, and performance evaluations on both synthetic and real-world datasets show that our algorithms significantly outperform existing state-of-the-art algorithms in terms of sampling efficiency.

How does model uncertainty affect multi-objective optimization?

Various real-world applications involve modeling complex systems with immense uncertainty and optimizing multiple objectives based on the uncertain model. Being able to quantify the impact of such model uncertainty on the operational objectives of interest is critical, for example, to design optimal experiments that can most effectively reduce the uncertainty that affects the objectives pertinent to the application at hand. In fact, such objective-based uncertainty quantification (objective-UQ) has been shown to be much more efficient for optimal experimental design (OED) compared to other approaches that do not explicitly aim at reducing the “uncertainty that actually matters”.

The concept of MOCU (mean objective cost of uncertainty) provides an effective means to quantify this objective uncertainty, but its original definition was limited to the case of single objective operations.
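
For reference, the original single-objective MOCU can be written as follows (generic notation for illustration, which may differ slightly from the paper): for an uncertainty class Θ of possible models θ with a model-dependent operator cost C_θ(ψ),

\[
\psi^{\mathrm{rob}}_{\Theta} \;=\; \operatorname*{arg\,min}_{\psi}\; \mathbb{E}_{\theta}\!\left[ C_{\theta}(\psi) \right],
\qquad
M(\Theta) \;=\; \mathbb{E}_{\theta}\!\left[ C_{\theta}\!\left(\psi^{\mathrm{rob}}_{\Theta}\right) \;-\; C_{\theta}\!\left(\psi^{\mathrm{opt}}_{\theta}\right) \right],
\]

where ψ^opt_θ denotes the operator that would be optimal if θ were the true model. The multi-objective MOCU introduced in the paper generalizes this expected cost gap to settings with multiple operational objectives.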

In our recent paper, we extend the original MOCU to propose the mean multi-objective cost of uncertainty (multi-objective MOCU), which can be used for objective-based quantification of uncertainty for complex uncertain systems considering multiple operational objectives:

Byung-Jun Yoon, Xiaoning Qian, Edward R. Dougherty, “Quantifying the multi-objective cost of uncertainty,” IEEE Access, vol. 9, pp. 80351-80359, 2021, doi: 10.1109/ACCESS.2021.3085486.

Based on several examples, we illustrate the concept of multi-objective MOCU and demonstrate its efficacy in quantifying the operational impact of model uncertainty when there are multiple, possibly competing, objectives.

The multi-objective MOCU quantifies the expected performance gap between the robust multi-objective operator, which must be used to maintain good performance in the presence of model uncertainty, and the optimal multi-objective operator for the true (but unknown) model.

Optimal experimental design for uncertain systems based on coupled ordinary differential equations

The paper entitled “Optimal Experimental Design for Uncertain Systems Based on Coupled Differential Equations” has been published in IEEE Access and is now accessible at the link below:

Youngjoon Hong, Bongsuk Kwon, and Byung-Jun Yoon, “Optimal Experimental Design for Uncertain Systems Based on Coupled Differential Equations,” IEEE Access, doi: 10.1109/ACCESS.2021.3071038.

In this work, a general optimal experimental design (OED) strategy is proposed for an uncertain system that is described by coupled ordinary differential equations (ODEs), whose parameters are not completely known. As a vehicle for developing the OED strategy, this study focuses on non-homogeneous Kuramoto oscillator models, where the objective is the robust control of a given uncertain Kuramoto model to achieve global frequency synchronization.
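
For readers unfamiliar with the model, the following minimal Python sketch simulates a small non-homogeneous Kuramoto network with SciPy and checks for frequency synchronization. The natural frequencies, coupling matrix, and time horizon are arbitrary illustrative choices rather than the settings studied in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def kuramoto_rhs(t, theta, omega, A):
    """d(theta_i)/dt = omega_i + sum_j A_ij * sin(theta_j - theta_i)."""
    diff = theta[None, :] - theta[:, None]        # pairwise phase differences
    return omega + np.sum(A * np.sin(diff), axis=1)

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]; r near 1 indicates phase coherence."""
    return np.abs(np.mean(np.exp(1j * theta)))

rng = np.random.default_rng(0)
N = 5
omega = rng.normal(0.0, 1.0, size=N)              # heterogeneous natural frequencies
A = 0.8 * (np.ones((N, N)) - np.eye(N)) / N       # uniform coupling (illustrative strength)
theta0 = rng.uniform(0, 2 * np.pi, size=N)

sol = solve_ivp(kuramoto_rhs, (0.0, 50.0), theta0, args=(omega, A),
                t_eval=np.linspace(0.0, 50.0, 2000), rtol=1e-8)

# Frequency synchronization: instantaneous frequencies converge to a common value.
freqs = kuramoto_rhs(None, sol.y[:, -1], omega, A)
print("final order parameter:", order_parameter(sol.y[:, -1]))
print("spread of instantaneous frequencies:", np.ptp(freqs))
```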

[Figure] Illustrative overview of the proposed optimal experimental design (OED) framework.

The proposed OED strategy quantifies the objective uncertainty of the Kuramoto model based on the mean objective cost of uncertainty (MOCU) and identifies the optimal experiment as the one in the design space that is expected to maximally reduce the MOCU.

This study highlights the importance of quantifying the operational impact of the potential experiments in designing the optimal experiment, and it demonstrates that the MOCU-based OED scheme enables one to minimize the cost of robust control of an uncertain Kuramoto model with the fewest experiments among the alternatives considered.

The proposed scheme is fairly general and it can be applied to any uncertain complex system represented by coupled ODEs.

AISTATS 2021 paper entitled “Bayesian Active Learning by Soft Mean Objective Cost of Uncertainty” now available

The AISTATS 2021 paper entitled “Bayesian Active Learning by Soft Mean Objective Cost of Uncertainty” can now be accessed at the following link:

Guang Zhao, Edward Dougherty, Byung-Jun Yoon, Francis Alexander, Xiaoning Qian, “Bayesian Active Learning by Soft Mean Objective Cost of Uncertainty,” 24th International Conference on Artificial Intelligence and Statistics (AISTATS), April 13 – 15, 2021.

In this paper, a strictly concave approximation of MOCU, referred to as “Soft MOCU,” is proposed, which can be used to define an acquisition function for Bayesian active learning with a theoretical convergence guarantee. The study shows that Soft-MOCU-based Bayesian active learning outperforms other existing methods, with the important additional benefit of a theoretical guarantee of convergence to the optimal classifier.
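
To convey the intuition behind the soft approximation, the toy snippet below contrasts the hard maximum with a log-sum-exp soft maximum; this particular soft-max form and the temperature k are used here purely for illustration, and the exact construction used to define Soft MOCU should be taken from the paper.

```python
import numpy as np

def hard_max(q):
    """Hard maximum over class probabilities (piecewise linear in q)."""
    return float(np.max(q))

def soft_max(q, k=10.0):
    """Log-sum-exp soft maximum: a smooth convex upper bound on max(q) that
    approaches max(q) as k grows. Used here only as an illustrative stand-in."""
    q = np.asarray(q, dtype=float)
    return float(np.log(np.sum(np.exp(k * q))) / k)

q = np.array([0.55, 0.45])   # toy predictive distribution over two classes
print(hard_max(q))           # 0.55
print(soft_max(q, k=5.0))    # noticeably above 0.55 (smooth, strictly positive slack)
print(soft_max(q, k=100.0))  # close to 0.55
```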

ICLR 2021 paper entitled “Uncertainty-aware Active Learning for Optimal Bayesian Classifier” now available online

We are happy to announce that our ICLR 2021 paper below can now be accessed online on OpenReview.net:

Guang Zhao, Edward Dougherty, Byung-Jun Yoon, Francis Alexander, Xiaoning Qian, “Uncertainty-aware Active Learning for Optimal Bayesian Classifier,” 9th International Conference on Learning Representations (ICLR), May 4-8, 2021.

In this paper, we propose an acquisition function for active learning of a Bayesian classifier based on a weighted form of MOCU (mean objective cost of uncertainty). By quantifying the uncertainty that directly affects the classification error, the proposed method avoids the myopic behavior that limits previous expected loss reduction (ELR) methods. Unlike existing ELR methods, which may get stuck before reaching the optimal classifier, the proposed weighted-MOCU-based strategy guarantees convergence to the optimal classifier of the true model. We demonstrate its performance with both synthetic and real-world datasets.
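
As background, the sketch below computes the (unweighted) pointwise MOCU for a Bayesian classifier defined over a finite set of candidate models; the particular weighting that defines the weighted-MOCU acquisition is not reproduced here, so this should be read only as the baseline quantity the paper builds on, with array shapes and names chosen purely for illustration.

```python
import numpy as np

def pointwise_mocu(post, P, x_idx):
    """MOCU at a single input: expected accuracy of the model-specific optimal
    classifier minus the accuracy of the optimal Bayesian classifier (OBC).

    post : (M,) posterior probabilities over M candidate models
    P    : (M, N, C) array, P[m, i, c] = p(y=c | x_i) under model m
    """
    p_x = P[:, x_idx, :]                          # (M, C) class probabilities at x
    exp_best = np.sum(post * p_x.max(axis=1))     # E_theta[ max_y p_theta(y|x) ]
    obc_acc = np.max(post @ p_x)                  # max_y E_theta[ p_theta(y|x) ]
    return float(exp_best - obc_acc)

def mocu_over_pool(post, P, pool_weights=None):
    """Average pointwise MOCU over the candidate pool (uniform weights by default)."""
    N = P.shape[1]
    w = np.full(N, 1.0 / N) if pool_weights is None else pool_weights
    return sum(w[i] * pointwise_mocu(post, P, i) for i in range(N))
```

An acquisition function in this spirit scores each unlabeled point by the expected reduction of such a quantity after observing its label, with the posterior over models updated by Bayes' rule.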

Apply for 2021 NSF Math Sciences Graduate Internship (MSGI)

We are happy to announce the opportunity to apply for the 2021 NSF Math Sciences Graduate Internship (MSGI) to work on the research project entitled: Uncertainty-Aware Data-Driven Models for Optimal Learning and Robust Decision Making Under Uncertainty. (Mentors: Drs. Nathan Urban & Byung-Jun Yoon)

This project aims to develop Scientific ML techniques that enable objective-driven uncertainty quantification (UQ) for data-driven models. We will focus on developing theories and algorithms that can ultimately lead to an automated procedure for learning effective surrogates of complex systems, which can be used for making optimal decisions robust to system uncertainties and surrogate approximation errors. These goals will be attained based on a Bayesian ML paradigm, in which we integrate scientific prior knowledge on the system and the available data to obtain a prior directly characterizing the scientific uncertainty in the physical system, quantify the uncertainty relative to the objective, develop optimal operators robust to the uncertainty, and design strategies that can optimally reduce the uncertainty and thereby directly contribute to the attainment of the objective. Potential applications of this methodology will be discussed with the student, but may focus on biological and biomedical discovery science. Detailed information about this project can be found in the project catalog at the following link (search for reference code: BNL-URBAN1): https://orise.orau.gov/nsf-msgi/project-catalog.html

The NSF Mathematical Sciences Graduate Internship (MSGI) program is aimed at students who are interested in understanding the application of advanced mathematical and statistical techniques to “real world” problems, regardless of whether they plan to pursue an academic or nonacademic career. Internship activities will vary based on the assigned research project and hosting facility. As part of your application, you will identify your top 3 research projects from the 2021 NSF MSGI Project Catalog: https://orise.orau.gov/nsf-msgi/project-catalog.html

Further information about the NSF Mathematical Sciences Graduate Internship program can be found at:
https://zintellect.com/Opportunity/Details/NSF-MSGI-2021

Application Deadline: January 13, 2021, 4:00 PM Eastern Time