Evaluating the Scientific Foundations of Latent Fingerprint Comparisons

Michael Rosenblum (Co-Author)
Johns Hopkins University, Bloomberg School of Public Health

Amanda Luby (Co-Author)
Carleton College

Maria Cuellar (Co-Author)
University of Pennsylvania

Michael Rosenblum (Speaker)
Johns Hopkins University, Bloomberg School of Public Health
Monday, Aug 4: 10:35 AM - 10:55 AM
Topic-Contributed Paper Session
Music City Center
Multiple reviews, including those by the National Academy of Sciences (2009), PCAST (2016), and AAAS (2017), have concluded that forensic latent fingerprint comparison lacks empirical validation. Scientific validity requires rigorously designed studies of examiner performance: accuracy, repeatability, and reproducibility. We performed a systematic review of black-box studies evaluating latent fingerprint comparisons and found that all suffer from fundamental design and statistical flaws. These flaws (including inadequate sample sizes, non-representative samples and test conditions, improper handling of inconclusive decisions, and flawed error rate estimation) preclude the studies from establishing the field's scientific validity. Furthermore, these studies omit key elements of real casework, such as database searches (AFIS), contextual bias, and real-world complexity. As a result, the error rates of latent fingerprint examiners remain unknown, and claims of reliability and accuracy lack scientific support. We offer recommendations for future studies to ensure valid experimental design, statistical analysis, and real-world relevance, advancing the field toward scientific rigor and admissibility.