Mean-field Analysis of Generalization Errors

Dr. Gholamali Aminian (The Alan Turing Institute)

Wednesday, 7th February, 2pm

Abstract: We propose a novel framework for exploring the weak generalization error of algorithms through the lens of differential calculus on the space of probability measures. Specifically, we consider the KL-regularized empirical risk minimization problem and establish generic conditions under which the generalization error convergence rate, when training on a sample of size $n$, is $O(1/n)$. In the context of supervised learning with a one-hidden-layer neural network in the mean-field regime, these conditions are reflected in suitable integrability and regularity assumptions on the loss and activation functions.
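As a rough illustration of the setting in the abstract (all details here are assumptions for exposition, not the speaker's method): in the mean-field regime a one-hidden-layer network is viewed as an average over "particles" (hidden units), whose empirical distribution approximates a probability measure over parameters. A standard algorithm for the KL-regularized objective is noisy gradient descent (a mean-field Langevin scheme), where the injected Gaussian noise plays the role of the entropy part of the KL regularizer. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, X):
    """Mean-field network f(x) = (1/M) * sum_j a_j * tanh(w_j . x)."""
    a, W = particles[:, 0], particles[:, 1:]
    return np.tanh(X @ W.T) @ a / len(a)

def empirical_risk(particles, X, y):
    """Squared-loss empirical risk on the sample (X, y)."""
    return 0.5 * np.mean((predict(particles, X) - y) ** 2)

def langevin_step(particles, X, y, lr=0.2, beta=1e6):
    """One noisy gradient step; the sqrt(2*lr/beta) Gaussian noise
    corresponds to the entropy/KL part of the regularized objective."""
    M = len(particles)
    a, W = particles[:, 0], particles[:, 1:]
    h = np.tanh(X @ W.T)                      # (n, M) hidden activations
    res = h @ a / M - y                       # (n,) residuals
    # Per-particle gradients of the risk (mean-field 1/M scaling absorbed)
    grad_a = h.T @ res / len(y)
    grad_W = ((res[:, None] * (1 - h**2)).T @ X) * a[:, None] / len(y)
    grad = np.column_stack([grad_a, grad_W])
    noise = rng.standard_normal(particles.shape)
    return particles - lr * grad + np.sqrt(2 * lr / beta) * noise

# Toy regression task: n samples, d features, M particles
n, d, M = 200, 2, 64
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0])                           # simple teacher function
particles = rng.standard_normal((M, 1 + d))   # each row: (a_j, w_j)

r0 = empirical_risk(particles, X, y)
for _ in range(500):
    particles = langevin_step(particles, X, y)
print(f"empirical risk: {r0:.4f} -> {empirical_risk(particles, X, y):.4f}")
```

The talk's $O(1/n)$ rate concerns how the gap between this empirical risk and the population risk shrinks with the sample size $n$, under integrability and regularity assumptions on the loss and activation.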

Institute for Financial and Actuarial Mathematics