Automatic recognition of heart disease based on phonocardiogram
Ting-Yu Yan
Co-Author
Department of Applied Mathematics, National Sun Yat-sen University
Ming-chun Yang
Co-Author
Department of Pediatrics, E-Da Hospital, Kaohsiung, Taiwan
Wei-Chen Lin
Co-Author
Department of Medical Research, E-Da Hospital
Meihui Guo
First Author
National Sun Yat-Sen University
Meihui Guo
Presenting Author
National Sun Yat-Sen University
Wednesday, Aug 6: 8:40 AM - 8:45 AM
1034
Contributed Speed
Music City Center
Heart sound recognition is crucial for early detection of cardiovascular disease, but auscultation alone often poses diagnostic challenges, even for experienced clinicians. To address this, we propose a convolutional recurrent neural network with attention (CRNNA) combined with machine-learning classifiers, using MFCC, STFT, and deep scattering features. Applied to 512 heart sound recordings from E-Da Hospital, our CRNNA + LightGBM model achieved 92.2% accuracy (specificity: 96.2%, sensitivity: 88%), outperforming physicians by 9.7% in accuracy and 24% in sensitivity.
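As a rough illustration of the pipeline described above, the sketch below extracts MFCC and log-STFT features from a phonocardiogram, passes them through a small convolutional recurrent network with attention pooling, and feeds the pooled embeddings to a LightGBM classifier. The window sizes, layer widths, sampling rate, and the specific way embeddings are stacked into LightGBM are assumptions for illustration only, not the authors' exact configuration.

```python
# Minimal sketch, assuming MFCC/STFT inputs and a CRNN-with-attention
# embedding stacked into LightGBM; all hyperparameters are illustrative.
import numpy as np
import librosa
import torch
import torch.nn as nn
import lightgbm as lgb


def pcg_features(signal, sr=2000, n_mfcc=20):
    """MFCC and log-STFT magnitude features for one PCG recording (assumed sr)."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)      # (n_mfcc, T)
    stft = np.abs(librosa.stft(signal, n_fft=256, hop_length=128))   # (129, T')
    log_stft = librosa.amplitude_to_db(stft)
    return mfcc, log_stft


class CRNNA(nn.Module):
    """Convolutional recurrent network with a simple attention-pooling layer."""

    def __init__(self, n_feat=20, hidden=64, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_feat, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.gru = nn.GRU(64, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)        # attention score per time frame
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                            # x: (batch, n_feat, T)
        h = self.conv(x).transpose(1, 2)             # (batch, T, 64)
        h, _ = self.gru(h)                           # (batch, T, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)       # (batch, T, 1) over time
        z = (w * h).sum(dim=1)                       # attention-pooled embedding
        return self.head(z), z, w.squeeze(-1)        # logits, embedding, weights


def fit_lightgbm_on_embeddings(embeddings, labels):
    """Illustrative stacking step: CRNNA embeddings as LightGBM inputs."""
    clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
    clf.fit(embeddings, labels)
    return clf
```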
Using self-attention mechanisms, we visualized the model's focus areas, which closely matched physicians' auscultation regions, demonstrating its ability to act as a diagnostic proxy. Validation on the 2016 PhysioNet/CinC Challenge database further confirmed the model's robustness, achieving 95% accuracy (specificity: 93%, sensitivity: 98%).
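The kind of visualization referred to above can be produced by overlaying the per-frame attention weights on the PCG waveform; the snippet below is one possible way to do this, assuming weights like those returned by the CRNNA sketch and a hop length matching the input features.

```python
# Hedged sketch: overlay per-frame attention weights on the PCG waveform.
# Frame-to-sample alignment via hop_length is an assumption and must match
# the hop used when computing the input features.
import numpy as np
import matplotlib.pyplot as plt


def plot_attention(signal, attn_weights, sr=2000, hop_length=512):
    """signal: 1-D PCG samples; attn_weights: 1-D per-frame attention scores."""
    t_signal = np.arange(len(signal)) / sr
    t_frames = np.arange(len(attn_weights)) * hop_length / sr

    fig, ax1 = plt.subplots(figsize=(10, 3))
    ax1.plot(t_signal, signal, color="gray", lw=0.5)   # raw waveform
    ax1.set_xlabel("time (s)")
    ax1.set_ylabel("PCG amplitude")

    ax2 = ax1.twinx()                                   # attention on a second axis
    ax2.fill_between(t_frames, attn_weights, alpha=0.4, color="tab:red")
    ax2.set_ylabel("attention weight")

    plt.tight_layout()
    plt.show()
```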
CRNNA
Deep scattering
Heart sound classification
LightGBM
MFCC
PCG
Main Sponsor
Section on Statistical Learning and Data Science