Bayesian inference for neural networks, or Bayesian deep learning, has the potential to provide well-calibrated predictions with quantified uncertainty and robustness. Its main hurdle, however, is computational cost: because the parameter space is so high-dimensional, Bayesian inference tends to be expensive even for relatively small neural networks.

In our recent paper, “Learning Active Subspaces for Effective and Scalable Uncertainty Quantification in Deep Neural Networks”, we proposed a scheme that addresses this limitation by constructing a low-dimensional subspace of the neural network parameters, referred to as an “active subspace”, from the parameter directions that most strongly influence the network’s output. We demonstrated that this greatly reduced active subspace enables effective and scalable Bayesian inference via either Monte Carlo (MC) sampling methods or variational inference, and that, empirically, our approach provides reliable predictions with robust uncertainty estimates across a variety of regression tasks.
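
As a rough illustration of the idea (a minimal sketch, not the code from the paper), the example below builds an active subspace for a toy regression network: it estimates the gradient outer-product matrix C = E[∇θ f(x; θ) ∇θ f(x; θ)ᵀ] by averaging over the training inputs, takes its top-k eigenvectors as the subspace basis P, and reparameterizes the weights as θ = θ_MAP + Pz, so that Bayesian inference (MC sampling or variational inference) only has to operate on the low-dimensional z. The toy data, the tiny network, and names such as theta_map are illustrative assumptions, not artifacts from the paper.

```python
# Minimal sketch of an active-subspace construction for a toy network.
# Assumes the "active" directions are the leading eigenvectors of the
# gradient outer-product matrix C = E[g g^T], estimated by Monte Carlo.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy 1-D regression inputs (illustrative only)
x = torch.linspace(-2, 2, 128).unsqueeze(1)

# Small MLP; in practice theta_map would come from ordinary (MAP) training
net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
params = list(net.parameters())
n_params = sum(p.numel() for p in params)

def flat_grad(scalar_out):
    # Flatten d(scalar_out)/d(theta) into one long vector
    grads = torch.autograd.grad(scalar_out, params)
    return torch.cat([g.reshape(-1) for g in grads])

# Monte Carlo estimate of C = E[g g^T] over the training inputs
C = torch.zeros(n_params, n_params)
for xi in x:
    g = flat_grad(net(xi.unsqueeze(0)).squeeze())
    C += torch.outer(g, g)
C /= len(x)

# Active subspace = span of the top-k eigenvectors of C
k = 5
eigvals, eigvecs = torch.linalg.eigh(C)  # eigenvalues in ascending order
P = eigvecs[:, -k:]                      # (n_params, k) subspace basis

# Bayesian inference (MC sampling or VI) would now target the k-dim z,
# recovering the full weights as theta = theta_map + P @ z
theta_map = torch.cat([p.detach().reshape(-1) for p in params])

def set_params(theta):
    # Write a flat parameter vector back into the network
    offset = 0
    with torch.no_grad():
        for p in params:
            p.copy_(theta[offset:offset + p.numel()].view_as(p))
            offset += p.numel()

z = torch.zeros(k)  # e.g. one posterior sample over the subspace
set_params(theta_map + P @ z)
```

Since k is tiny compared with n_params, standard samplers or variational families that would be intractable over the full weight space become practical over z.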

For further information, please refer to the full paper available at:

https://ieeexplore.ieee.org/abstract/document/10448265
