Conference paper information

Variational Linearized Laplace Approximation for Bayesian Deep Learning

L.A. Ortega, S. Rodríguez-Santana, D. Hernández-Lobato

41st International Conference on Machine Learning - ICML 2024, Vienna (Austria). 21-27 July 2024


Summary:

The Linearized Laplace Approximation (LLA) has been recently used to perform uncertainty estimation on the predictions of pre-trained deep neural networks (DNNs). However, its widespread application is hindered by significant computational costs, particularly in scenarios with a large number of training points or DNN parameters. Consequently, additional approximations of LLA, such as Kronecker-factored or diagonal approximate GGN matrices, are utilized, potentially compromising the model's performance. To address these challenges, we propose a new method for approximating LLA using a variational sparse Gaussian Process (GP). Our method is based on the dual RKHS formulation of GPs and retains as the predictive mean the output of the original DNN. Furthermore, it allows for efficient stochastic optimization, which results in sub-linear training time in the size of the training dataset. Specifically, its training cost is independent of the number of training points. We compare our proposed method against accelerated LLA (ELLA), which relies on the Nyström approximation, as well as other LLA variants employing the sample-then-optimize principle. Experimental results, both on regression and classification datasets, show that our method outperforms these existing efficient variants of LLA, both in terms of the quality of the predictive distribution and in terms of total computational time.
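
For context, the sketch below (ours, not the authors' code) illustrates what exact LLA computes on a toy NumPy regression problem: the predictive mean is simply the pre-trained network's output, while the predictive covariance is J(x*) Lam^{-1} J(x*)^T, where Lam is the GGN plus the prior precision over all P parameters. All names (f, jacobian, theta_star, etc.) are illustrative. Forming and solving the P x P system is the cost that becomes intractable for large DNNs; the paper's actual contribution, replacing this step with a variational sparse GP in the dual RKHS formulation, is not implemented here.

# Illustrative-only sketch (not the paper's code): exact LLA predictive
# distribution for a toy regression model, computed with NumPy.
import numpy as np

rng = np.random.default_rng(0)

# A tiny "pre-trained" MLP, 1 -> 8 -> 1, with all parameters in one vector theta.
D_IN, D_H = 1, 8
shapes = [(D_H, D_IN), (D_H,), (1, D_H), (1,)]
splits = np.cumsum([int(np.prod(s)) for s in shapes])[:-1]
P = int(sum(np.prod(s) for s in shapes))

def unpack(theta):
    w1, b1, w2, b2 = np.split(theta, splits)
    return w1.reshape(D_H, D_IN), b1, w2.reshape(1, D_H), b2

def f(theta, x):
    # Network output for inputs x of shape (N, 1); returns shape (N,).
    w1, b1, w2, b2 = unpack(theta)
    return (np.tanh(x @ w1.T + b1) @ w2.T + b2).ravel()

def jacobian(theta, x, eps=1e-6):
    # Finite-difference Jacobian J(x) of shape (N, P); stands in for autodiff.
    J = np.empty((x.shape[0], P))
    for p in range(P):
        e = np.zeros(P); e[p] = eps
        J[:, p] = (f(theta + e, x) - f(theta - e, x)) / (2.0 * eps)
    return J

# Toy data and a stand-in MAP solution theta_star (random here, instead of training).
X = rng.uniform(-3.0, 3.0, size=(50, 1))
theta_star = rng.normal(scale=0.3, size=P)
sigma2, prior_prec = 0.25, 1.0        # observation noise and Gaussian prior precision

# Exact LLA: posterior precision over parameters is the GGN plus the prior precision.
J_tr = jacobian(theta_star, X)
Lam = J_tr.T @ J_tr / sigma2 + prior_prec * np.eye(P)   # (P, P) -- the costly object

# Predictive distribution at test inputs: the mean is the original DNN output,
# the covariance is J(x*) Lam^{-1} J(x*)^T plus observation noise.
X_test = np.linspace(-4.0, 4.0, 9).reshape(-1, 1)
J_te = jacobian(theta_star, X_test)
mean = f(theta_star, X_test)
cov = J_te @ np.linalg.solve(Lam, J_te.T) + sigma2 * np.eye(len(X_test))
print("predictive mean:", np.round(mean, 3))
print("predictive std :", np.round(np.sqrt(np.diag(cov)), 3))

For realistic networks neither the full Jacobian over the training set nor the (P, P) matrix Lam can be stored or inverted, which is precisely the bottleneck that the proposed variational sparse GP approximation avoids while keeping the per-step training cost independent of the number of training points.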


Spanish layman's summary:

The Linearized Laplace Approximation (LLA) estimates uncertainty in neural networks, but it is costly. We propose using a sparse variational GP that preserves the network's predictive mean and allows sub-linear training, outperforming other LLA variants.


English layman's summary:

The Linearized Laplace Approximation (LLA) helps estimate uncertainty in deep neural networks but is computationally costly. To improve efficiency, we propose using a variational sparse Gaussian Process (GP) that retains the DNN's predictive mean and enables sub-linear training time, outperforming other LLA variants.


Keywords: Gaussian Processes, Linearized Laplace Approximation (LLA), Post-hoc approximation


Published in Proceedings of Machine Learning Research, vol. 235, pp. 38815-38836

Publication date: 2024-07-27.



Citation:
L.A. Ortega, S. Rodríguez-Santana, D. Hernández-Lobato, Variational Linearized Laplace Approximation for Bayesian Deep Learning, 41st International Conference on Machine Learning - ICML 2024, Vienna (Austria), 21-27 July 2024. In: Proceedings of Machine Learning Research, vol. 235, pp. 38815-38836, e-ISSN: 2640-3498