28: Improvement of Bayesian Personalized Ranking inference using the AWSGLD algorithm

Sooyoung Cheon, Co-Author

Ah-Rim Joo, First Author and Presenting Author
 
Tuesday, Aug 5: 10:30 AM - 12:20 PM
1840 
Contributed Posters 
Music City Center 
User purchase histories and rating data often suffer from bias and sparsity. To address this problem, Bayesian personalized ranking (BPR; Rendle et al., 2009) applies statistical techniques to implicit feedback data that reflect user preferences inferred from behavioral history, data that are typically large-scale yet sparse. The traditional BPR algorithm employs stochastic gradient descent (SGD) because of its computational simplicity and ease of implementation. However, SGD becomes inefficient when optimizing anisotropic functions, where gradients vary by direction. To overcome this limitation, this study proposes optimizing the BPR posterior distribution with the adaptively weighted stochastic gradient Langevin dynamics (AWSGLD; Deng et al., 2022) algorithm, which is highly scalable and capable of self-adjustment within the sample space. In addition, we explore applying the adaptive weighting technique to the stochastic gradient Nosé-Hoover thermostat (SGNHT; Ding et al., 2014). Empirical analyses demonstrate that the proposed adaptively weighted stochastic gradient MCMC (AWSGMCMC)-based BPR algorithms significantly outperform traditional recommendation methods, highlighting their potential to improve recommendation accuracy.
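For intuition only, the sketch below contrasts a plain SGD step on the BPR matrix-factorization criterion with a Langevin-style step that injects Gaussian noise into the stochastic gradient. It is a minimal illustration under simplifying assumptions: it implements a vanilla SGLD-type update on a single (user, positive item, negative item) triple, not the adaptive weighting of AWSGLD (Deng et al., 2022) or the SGNHT dynamics studied in the abstract, and all function and variable names are illustrative.

```python
import numpy as np

def bpr_update(W, H, u, i, j, lr=0.01, reg=0.01, langevin=False, rng=None):
    """One BPR update for user u, positive item i, negative item j.

    W: user factor matrix (n_users x k); H: item factor matrix (n_items x k).
    With langevin=True, a Langevin-style step is taken: half the learning
    rate scales the gradient and N(0, lr) noise is added per coordinate
    (a plain SGLD step; the full-data rescaling of the likelihood gradient
    and the adaptive weights of AWSGLD are omitted for brevity).
    """
    rng = rng or np.random.default_rng()
    x_uij = W[u] @ (H[i] - H[j])           # score difference x_ui - x_uj
    sig = 1.0 / (1.0 + np.exp(x_uij))      # sigma(-x_uij)

    # Stochastic gradients of the negative BPR-OPT objective
    # -ln sigma(x_uij) + reg * ||theta||^2 for the three touched factors
    grad_wu = -sig * (H[i] - H[j]) + reg * W[u]
    grad_hi = -sig * W[u] + reg * H[i]
    grad_hj = sig * W[u] + reg * H[j]

    def step(param, grad):
        if langevin:
            # Langevin step: half step size on the gradient plus N(0, lr) noise
            return param - 0.5 * lr * grad + rng.normal(0.0, np.sqrt(lr), size=param.shape)
        return param - lr * grad            # plain SGD step

    W[u] = step(W[u], grad_wu)
    H[i] = step(H[i], grad_hi)
    H[j] = step(H[j], grad_hj)
```

A training loop would repeatedly sample (u, i, j) triples from the observed implicit feedback and call bpr_update; with langevin=True the post-burn-in iterates can be treated as approximate posterior draws rather than a single point estimate.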

Keywords

Personalized recommendation algorithm

Bayesian Personalized Ranking

adaptively weighted stochastic gradient MCMC

Implicit data 

Main Sponsor

Section on Statistical Computing