20: Beyond the Black Box: Data Ethics, Explainable AI, and Human Oversight
Chong Yu
First Author
Hawaii Pacific University
Chong Yu
Presenting Author
Hawaii Pacific University
Wednesday, Aug 6: 10:30 AM - 12:20 PM
1326
Contributed Posters
Music City Center
AI systems rely on complex, data-driven algorithms whose behavior emerges from training processes rather than explicit human programming. This inherent complexity often makes it difficult, even for experts, to fully understand how these systems generate their outputs. This "black box" nature can lead to misplaced trust in AI, biased decisions, and unjust social outcomes, raising serious ethical and practical concerns. To address these challenges, this paper explores key strategies for enhancing data transparency, algorithmic transparency, explainability, and interpretability. Explainable AI (XAI) techniques, such as feature importance analysis and counterfactual explanations, can help make AI decision-making more transparent; a counterfactual explanation, for instance, tells an affected person what minimal change to the inputs would have altered the model's decision. Additionally, hybrid models that combine black-box AI with interpretable components offer a balance between performance and accountability. However, no technical solution is sufficient on its own. Human oversight remains the most critical safeguard, ensuring that a responsible party is always accountable for AI-driven decisions. This is especially crucial in high-stakes domains such as healthcare, finance, and law enforcement, where AI's impact on human lives is profound.
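To illustrate the feature importance analysis mentioned above, the following is a minimal Python sketch using permutation importance from scikit-learn. The synthetic dataset and random-forest model are assumptions chosen only for illustration; the same inspection would apply to any fitted black-box estimator, and this is not the paper's own implementation.

# Illustrative sketch of feature importance analysis (an XAI technique).
# Data and model are stand-ins; any fitted black-box classifier could be inspected this way.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real high-stakes dataset (e.g., credit scoring).
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque ensemble model.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle each feature and measure the drop in test accuracy.
# Large drops flag the features the model actually relies on, making its behavior more transparent.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")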
Bias
Transparency
Explainability
Interpretability
Data ethics
AI ethics
Main Sponsor
Justice Equity Diversity and Inclusion Outreach Group