Paper
Parallel Inference for Latent Dirichlet Allocation on Graphics Processing Units
Published Dec 7, 2009 · Feng Yan, Ningyi Xu, Y. Qi
119 Citations · 16 Influential Citations
Abstract
The recent emergence of Graphics Processing Units (GPUs) as general-purpose parallel computing devices provides new opportunities to develop scalable learning methods for massive data. In this work, we consider the problem of parallelizing two inference methods for latent Dirichlet allocation (LDA) models on GPUs: collapsed Gibbs sampling (CGS) and collapsed variational Bayesian inference (CVB). To address the limited memory on GPUs, we propose a novel data partitioning scheme that effectively reduces the memory cost. This partitioning scheme also balances the computational load across multiprocessors and makes it easy to avoid memory access conflicts. We use data streaming to handle extremely large datasets. Extensive experiments showed that our parallel inference methods consistently produced LDA models with the same predictive power as sequential training methods, but with a 26x speedup for CGS and a 196x speedup for CVB on a GPU with 30 multiprocessors. The proposed partitioning scheme and data streaming make our approach scalable to more multiprocessors. Furthermore, they can be used as general techniques to parallelize other machine learning models.
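For reference, the sequential per-token update that CGS performs (and that the paper parallelizes) can be sketched as follows. This is a minimal toy implementation on a hypothetical two-document corpus, assuming symmetric Dirichlet priors; the corpus, hyperparameters, and variable names are illustrative, not taken from the paper.

```python
import random

# Toy corpus: each document is a list of word ids (hypothetical data)
docs = [[0, 1, 2, 0], [2, 3, 3, 1]]
K, V = 2, 4             # number of topics, vocabulary size
alpha, beta = 0.5, 0.1  # symmetric Dirichlet hyperparameters

# Count tables maintained by collapsed Gibbs sampling
n_dk = [[0] * K for _ in docs]      # document-topic counts
n_wk = [[0] * K for _ in range(V)]  # word-topic counts
n_k = [0] * K                       # per-topic totals
z = []                              # topic assignment per token

# Random initialization of topic assignments
random.seed(0)
for d, doc in enumerate(docs):
    z.append([])
    for w in doc:
        k = random.randrange(K)
        z[d].append(k)
        n_dk[d][k] += 1; n_wk[w][k] += 1; n_k[k] += 1

def gibbs_sweep():
    """One full sweep of collapsed Gibbs sampling over all tokens."""
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            # Remove the current assignment from the counts
            n_dk[d][k] -= 1; n_wk[w][k] -= 1; n_k[k] -= 1
            # Unnormalized conditional p(z = t | everything else)
            p = [(n_dk[d][t] + alpha) * (n_wk[w][t] + beta)
                 / (n_k[t] + V * beta) for t in range(K)]
            # Sample a new topic from the conditional
            r = random.random() * sum(p)
            for t in range(K):
                r -= p[t]
                if r <= 0:
                    break
            z[d][i] = t
            n_dk[d][t] += 1; n_wk[w][t] += 1; n_k[t] += 1

for _ in range(10):
    gibbs_sweep()
```

Because every token update writes to the shared word-topic table `n_wk`, naive parallelization across tokens would race on these counts, which is the conflict the paper's partitioning scheme is designed to avoid.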
Our parallel inference methods for latent Dirichlet allocation models on GPUs achieve a 26x speedup for CGS and a 196x speedup for CVB on a GPU with 30 multiprocessors.
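The scalability claimed above rests on the conflict-free partition schedule. A minimal sketch, assuming the scheme splits both documents and words into P partitions and, in epoch j, assigns processor p the block (p, (p + j) mod P) — my reading of the abstract's partitioning idea, with P and the helper name chosen for illustration:

```python
P = 4  # number of multiprocessors (hypothetical)

def schedule(P):
    """Yield, per epoch, the (doc_partition, word_partition) block
    assigned to each of the P processors."""
    for epoch in range(P):
        yield [(p, (p + epoch) % P) for p in range(P)]

for epoch, blocks in enumerate(schedule(P)):
    word_parts = [w for _, w in blocks]
    # Conflict-free: within an epoch, each processor writes to a
    # distinct word partition, so updates to the shared word-topic
    # count matrix never collide.
    assert len(set(word_parts)) == P
```

After P epochs every (document, word) block has been visited exactly once, which is why the schedule both covers the data and balances load across processors.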