Abstract
We compare alternative methodologies for Credit Risk estimation. We introduce a Bayesian Model Averaging (BMA) of Logistic Regressions for Probability of Default modeling, where the model space is sampled via Markov Chain Monte Carlo methods using the approximation of the posterior probability introduced in [Schwarz, 1978] and applied to sparse problems in [Sala-i-Martin et al., 2004], [Gross and Poblacion, 2004] and [Torresetti, 2021]. We compare this methodology against popular Machine Learning ensemble methods for regression trees: Bagging (Bootstrap Aggregating) and Boosting (Adaptive Boosting, Gradient Boosting). The results on the credit risk dataset at our disposal show that Gradient Boosting is the top performer, although with a slow convergence speed due to the cooling coefficient used to make it less greedy. BMA and Adaptive Boosting follow as not-too-distant seconds, with BMA exhibiting the faster convergence speed among the specific versions of the algorithms implemented. Finally, a simple average of estimates, as in Bagging, yields the lowest performance of the ensemble methods considered.
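The core idea of the BMA approach described above can be sketched as follows: each candidate logistic regression (a subset of predictors) is scored by the Schwarz (BIC) approximation to its log posterior probability, and the Probability of Default is the posterior-weighted average of the per-model predictions. This is a minimal illustrative sketch, not the paper's implementation: it enumerates all predictor subsets instead of sampling the model space via MCMC (feasible only for a small number of predictors), and the synthetic data, variable names, and the use of scikit-learn are assumptions.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

# Synthetic default data (hypothetical): only the first two of four
# predictors actually drive the probability of default.
rng = np.random.default_rng(0)
n, p = 500, 4
X = rng.normal(size=(n, p))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

models, bics = [], []
for k in range(1, p + 1):
    for subset in combinations(range(p), k):
        # Near-unpenalized fit (large C) to approximate the MLE
        clf = LogisticRegression(C=1e6, max_iter=1000).fit(X[:, subset], y)
        # Total log-likelihood of the fitted model
        ll = -log_loss(y, clf.predict_proba(X[:, subset])[:, 1], normalize=False)
        n_params = len(subset) + 1  # coefficients plus intercept
        # Schwarz criterion: BIC = k*ln(n) - 2*ln(L)
        bics.append(n_params * np.log(n) - 2.0 * ll)
        models.append((subset, clf))

# Posterior model weights: w_m proportional to exp(-BIC_m / 2)
bics = np.array(bics)
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()

# BMA Probability of Default: posterior-weighted average of model predictions
pd_bma = sum(wi * m.predict_proba(X[:, s])[:, 1]
             for wi, (s, m) in zip(w, models))
```

With many predictors the exhaustive loop is replaced, as in the abstract, by an MCMC walk over the model space that accepts or rejects candidate models according to the same BIC-based posterior weights.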
Authors
Roberto Torresetti
Journal
Social Science Research Network