Interpretable Deep Learning with Scalable Kernel-Based Density Estimation

Mithat Gonen, Co-Author
Memorial Sloan Kettering Cancer Center

Ayyuce Begum Bektas, First Author and Presenting Author
 
Monday, Aug 4: 11:30 AM - 11:35 AM
2464 
Contributed Speed 
Music City Center 
Interpretable deep learning is critical in fields such as healthcare, finance, and autonomous systems, where transparency is essential. This study presents a computationally efficient framework that integrates Random Fourier Features (RFF) with softmax-weighted kernel density estimation to introduce interpretability into deep learning models. By employing RFF for kernel approximation and refining the kernel density estimator, the method provides a structured approach to modeling complex data distributions while maintaining accuracy and efficiency. To assess robustness, a sensitivity analysis is conducted on the dimensionality (D) of the mapped space to evaluate its impact on computational complexity. The study also examines the integration of multiple kernels within deep learning models, allowing flexible representation of high-dimensional data; this is particularly relevant when distinct feature sets, such as gene collections, require separate kernel representations. The framework's performance is benchmarked in a conditional density estimation setting using real-world data.
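The abstract does not include implementation details, but the two ingredients it names can be sketched as follows. This is a minimal illustration, not the authors' method: it assumes a Gaussian (RBF) kernel approximated by RFF, a Gaussian smoother over scalar responses, and synthetic data; all names (`rff_map`, `conditional_density`) and parameter values (`D`, `sigma`, `h`) are hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_map(X, W, b, D):
    """Random Fourier Features: z(x) = sqrt(2/D) * cos(W'x + b),
    so that z(x) . z(x') approximates exp(-||x - x'||^2 / (2 sigma^2))."""
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Illustrative setup: Gaussian kernel with bandwidth sigma on d-dim inputs,
# approximated in a D-dimensional mapped space (the D varied in the
# abstract's sensitivity analysis).
d, D, sigma = 5, 200, 1.0
W = rng.normal(0.0, 1.0 / sigma, size=(d, D))  # spectral samples of the kernel
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

# Synthetic training data (stand-in for the real-world benchmark data).
n = 300
X = rng.normal(size=(n, d))
y = X[:, 0] + 0.1 * rng.normal(size=n)

Z = rff_map(X, W, b, D)  # (n, D) feature matrix, computed once

def conditional_density(x_new, y_grid, h=0.2):
    """Estimate p(y | x_new) as a softmax-weighted KDE over training responses."""
    z_new = rff_map(x_new[None, :], W, b, D)  # (1, D)
    scores = (Z @ z_new.T).ravel()            # approximate k(x_i, x_new) via RFF
    w = np.exp(scores - scores.max())
    w /= w.sum()                              # softmax weights over training points
    # Weighted Gaussian KDE over the training responses y_i.
    K = np.exp(-0.5 * ((y_grid[:, None] - y[None, :]) / h) ** 2) \
        / (h * np.sqrt(2.0 * np.pi))
    return K @ w                              # density values on y_grid
```

The softmax weights also provide the interpretability hook: for any query point, they identify which training examples dominate the estimate. With the features precomputed, each query costs O(dD + nD) operations rather than n exact kernel evaluations, which is where the choice of D trades accuracy against computational cost.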

Keywords

interpretable deep learning

machine learning

learning with kernels

random features

nonparametric conditional density estimation 

Main Sponsor

Section on Nonparametric Statistics