From Bias to Balance: A Data-Driven Approach to Fair Recruitment Practices
Shiyuan Wang
Co-Author
Department of Management, Central Michigan University
Hairu Fan
First Author and Presenting Author
Department of Statistics, Actuarial and Data Sciences, Central Michigan University
Tuesday, Aug 5: 2:50 PM - 3:05 PM
2137
Contributed Papers
Music City Center
Algorithmic bias in recruiting remains a persistent problem, particularly with respect to gender, education, and job category. As machine learning models increasingly shape hiring decisions, measuring bias and developing mitigation methods are crucial. This work studies bias at both the data and algorithmic levels using three datasets: COMPAS, Job Salary, and Adult Income. At the data level, we investigate measurement bias and distributional imbalances by evaluating demographic representation and its effect on salary and hiring scores; disparities are identified using correlation analysis, t-tests, and ANOVA. At the algorithmic level, we assess whether ML models predict outcomes differently across demographic groups. Random forest and logistic regression serve as baselines, while fairness-aware methods, including reweighting, adversarial debiasing, and equalized-odds post-processing, are applied to reduce bias while maintaining predictive accuracy. Preliminary findings indicate that demographic characteristics influence recruitment outcomes, warranting further research. The study also examines the trade-off between fairness and accuracy. Future work should investigate biases arising from user interactions.
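To make the described pipeline concrete, the following is a minimal sketch of its two stages: a data-level disparity test (t-test) and one of the named mitigation methods, Kamiran-Calders reweighting, wrapped around a logistic-regression baseline. The synthetic data, the selection_rate_gap helper, and all parameter choices are illustrative assumptions rather than the authors' actual setup; in practice the named datasets (e.g., Adult Income) would be substituted, and adversarial debiasing or equalized-odds post-processing would slot into the same accuracy-versus-gap comparison.

# Sketch: data-level disparity test, baseline model, and reweighting.
# Synthetic data stands in for the COMPAS / Job Salary / Adult Income datasets.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
a = rng.integers(0, 2, n)                       # protected attribute (e.g., gender)
x = rng.normal(size=(n, 3)) + a[:, None] * 0.3  # features mildly correlated with a
y = (x.sum(axis=1) + 0.8 * a + rng.normal(scale=1.5, size=n) > 0).astype(int)

# Data-level check: do outcome-relevant scores differ between groups?
t, p = ttest_ind(x[a == 1].sum(axis=1), x[a == 0].sum(axis=1))
print(f"group score gap: t = {t:.2f}, p = {p:.3g}")

X = np.column_stack([x, a])
X_tr, X_te, y_tr, y_te, a_tr, a_te = train_test_split(
    X, y, a, test_size=0.3, random_state=0
)

def selection_rate_gap(y_pred, groups):
    # Demographic-parity difference: gap in positive-prediction rates.
    return abs(y_pred[groups == 1].mean() - y_pred[groups == 0].mean())

# Baseline: plain logistic regression.
base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = base.predict(X_te)
print("baseline   acc:", (pred == y_te).mean(), " gap:", selection_rate_gap(pred, a_te))

# Kamiran-Calders reweighting: w(a, y) = P(A=a) P(Y=y) / P(A=a, Y=y),
# so each (group, label) cell is weighted as if A and Y were independent.
w = np.empty(len(y_tr))
for g in (0, 1):
    for lab in (0, 1):
        cell = (a_tr == g) & (y_tr == lab)
        w[cell] = (a_tr == g).mean() * (y_tr == lab).mean() / cell.mean()

fair = LogisticRegression(max_iter=1000).fit(X_tr, y_tr, sample_weight=w)
pred_f = fair.predict(X_te)
print("reweighted acc:", (pred_f == y_te).mean(), " gap:", selection_rate_gap(pred_f, a_te))

Comparing the two printed lines illustrates the fairness-accuracy trade-off the abstract examines: reweighting typically shrinks the selection-rate gap at a small cost in accuracy.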
Algorithmic Bias
Fairness in Hiring
Machine Learning
Bias Mitigation
Fairness Constraints
Main Sponsor
Section on Statistical Learning and Data Science