Majority vote learning in PAC-Bayesian theory: state of the art and novelty

Paul Viallard
Laboratoire Hubert Curien, Data Intelligence Team, Saint-Étienne

Date: 18/06/2021, 14:30 – 15:30

In machine learning, ensemble methods are ubiquitous: Boosting, Bagging, Support Vector Machines, and Random Forests are famous examples. Here we focus on models expressed as a weighted majority vote. The objective is then to learn a majority vote whose performance is guaranteed on new, unseen data. Such a guarantee can be obtained through PAC (Probably Approximately Correct) guarantees, a.k.a. generalization bounds, which upper-bound the risk that the majority vote makes an error (measured by the 0-1 loss). One statistical learning theory that provides such bounds for majority votes is the PAC-Bayesian framework. It has the advantage of offering bounds that can be optimized, so that learning algorithms can be derived from them. However, a major drawback of this framework is that the classical bounds do not directly bound the majority vote risk: one has to use a (loose) surrogate of the 0-1 loss.

In this talk, we recall the state-of-the-art learning algorithms based on PAC-Bayesian bound minimization. Moreover, we introduce three contributions that allow us to obtain majority votes with precise guarantees: (1) We introduce three algorithms based on a surrogate from the PAC-Bayesian literature called the C-Bound; our minimization procedures lead to tight generalization bounds since they directly minimize PAC-Bayesian bounds. (2) We discuss the use of another kind of bound for majority vote learning, called a "disintegrated PAC-Bayesian bound". (3) We introduce a way to learn a stochastic majority vote (with weights sampled from a Dirichlet distribution) whose guarantee holds directly on the majority vote risk: it does not require a surrogate (such as the C-Bound).
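To illustrate the gap that motivates this line of work, the sketch below (a toy example with synthetic data, not from the talk) contrasts the risk of a weighted majority vote with the Gibbs risk, i.e. the expected risk of a single voter drawn according to the weights. Classical PAC-Bayesian bounds control the Gibbs risk, and the majority vote risk is only recovered through the loose "factor 2" surrogate shown in the last line.

```python
import numpy as np

# Toy setup (illustrative only): 5 base classifiers voting on 200 examples,
# each represented by its vector of predictions in {-1, +1}.
rng = np.random.default_rng(0)
y = rng.choice([-1, 1], size=200)                  # true labels
H = np.sign(rng.normal(size=(5, 200)) + 0.8 * y)   # noisy voters correlated with y
H[H == 0] = 1                                      # break sign ties
rho = np.array([0.3, 0.25, 0.2, 0.15, 0.1])        # majority-vote weights, sum to 1

# Majority-vote risk: 0-1 loss of the rho-weighted vote.
mv_pred = np.sign(rho @ H)
mv_risk = float(np.mean(mv_pred != y))

# Gibbs risk: rho-weighted average of the individual voters' 0-1 risks.
gibbs_risk = float(rho @ np.mean(H != y, axis=1))

# The classical surrogate: the majority-vote risk is at most twice the
# Gibbs risk. This inequality always holds, but can be very loose, which
# is why tighter surrogates such as the C-Bound are of interest.
assert mv_risk <= 2 * gibbs_risk
```

The factor-2 inequality follows from a Markov-style argument: whenever the majority vote errs on an example, voters carrying at least half of the total weight must err on it too.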



