Opening the Black-Box: Novel Approaches and Lessons Learned with Explainable Machine Learning

Abstract Number:

1585 

Submission Type:

Topic-Contributed Paper Session 

Participants:

Katherine Goode (1), Daniel Ries (1), Erin Acquesta (1), Amit Dhurandhar (2), Elisabeth Moore (3), Lucas Mentch (4), Michael Smith (1)

Institutions:

(1) Sandia National Laboratories, (2) IBM, (3) Pacific Northwest National Laboratory, (4) University of Pittsburgh

Chair:

Erin Acquesta  
Sandia National Laboratories

Co-Organizer:

Daniel Ries  
Sandia National Laboratories

Session Organizer:

Katherine Goode  
Sandia National Laboratories

Speaker(s):

Amit Dhurandhar  
IBM
Elisabeth Moore  
Pacific Northwest National Laboratory
Lucas Mentch  
University of Pittsburgh
Michael Smith  
Sandia National Laboratories
Katherine Goode  
Sandia National Laboratories

Session Description:

Focus: Machine learning (ML) models are powerful predictors used to solve problems in seemingly every application space. Unlike traditional statistical models, ML models are often not interpretable by construction. That is, the mathematical relationships between model inputs and predictions cannot be expressed in an understandable manner in the context of the application. This lack of transparency can have severe implications in high-consequence application spaces such as national security, medical diagnoses, and loan approvals. Explainable ML techniques have been developed with the intention of providing insight into these black-box models. Before explainability methods can be trusted in high-consequence applications, however, the methods themselves must be assessed. Presenters in this session will discuss novel developments in explainability for ML, present applications of the approaches, share cautionary tales and lessons learned, and discuss paths forward.

Content and Session Format: The session will consist of five talks. The proposed talk titles are as follows:

Dennis Wei and Amit Dhurandhar (co-authors; main presenter TBD): "AI Explainability 360: An Open-Source Toolkit of Explanation Techniques and Its Impact"
Elisabeth Moore: "Trustworthy Explanations: Evaluating Strengths and Weaknesses of Current XAI Techniques"
Lucas Mentch: "Meaning What? Some Cautionary Tales of Variable Importance."
Michael Smith: "Alternative Explanation Methods: Geometric Explanations of Prediction Uncertainty"
Katherine Goode: "A Proposed Framework for Evaluating the Maturity Level of Explanations"

Timeliness: Artificial intelligence (AI) is the new buzzword in the business world, and seemingly everyone wants a part of it. Although recent advances in AI show promise, we must recognize that some decisions are too important to leave to an unchecked algorithm. Explainability methods for ML and AI are being developed and assessed by statisticians and computer scientists to help users understand how their models work and where they will fail.

Appeal: At JSM 2023, in the late-breaking session "ChatGPT: Job-Killer, Flash in the Pan, or a Statistician's Best Friend?", panelists argued that statisticians have missed opportunities with machine learning in the past and need to be more involved in this latest wave of AI. Interpretable models are the bread and butter of statistical modeling, so the explainability of AI and ML models is a natural place for statisticians to get involved. This session will help foster that collaboration between the disciplines.

Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.

Sponsors:

Section on Statistical Learning and Data Science (1)
Section on Statistics in Defense and National Security (2)
Section on Statistical Graphics (3)

Theme: Statistics and Data Science: Informing Policy and Countering Misinformation

Yes

Applied

Yes

Estimated Audience Size

Medium (80-150)

I have read and understand that JSM participants must abide by the Participant Guidelines.

Yes

I understand and have communicated to my proposed speakers that JSM participants must register and pay the appropriate registration fee by June 1, 2024. The registration fee is nonrefundable.

I understand