64: Ensuring Model Performance Reliability through a Data-Centric Approach
Monday, Aug 4: 2:00 PM - 3:50 PM
1709
Contributed Posters
Music City Center
Businesses optimize ML models for marginal performance gains, but how often are business decisions made with full awareness of data quality?
Neither the importance of data quality nor the effort required to maintain it is new. However, the industry still lacks a standard way to quantify and monitor data quality. While companies rigorously optimize their models, data issues can quietly undermine performance, introduce bias, and lead to costly mistakes. For example, data errors at a leading credit agency, such as a misreported number of inquiries and incorrect tradeline ages, led to significant financial losses.
This study introduces the Data Reliability Score (DRS), a longitudinal metric for assessing data quality across both training and inference. Analogous to model performance metrics such as accuracy and mean squared error, DRS provides continuous monitoring across six key pillars rooted in statistical methodologies: Lineage, Completeness, Consistency, Bias, Frequency, and Accuracy.
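The abstract does not define how the pillar scores are computed or combined. As a minimal sketch, assuming each pillar is scored on [0, 1] and DRS is a weighted average of the pillar scores (the function names, the completeness proxy, and the equal default weights below are illustrative assumptions, not the study's actual formulation):

```python
# Minimal DRS sketch: each pillar scored on [0, 1], aggregated by a
# weighted average. All definitions here are hypothetical examples.
import pandas as pd

PILLARS = ["lineage", "completeness", "consistency",
           "bias", "frequency", "accuracy"]

def completeness_score(df: pd.DataFrame) -> float:
    """One simple completeness proxy: fraction of non-null cells."""
    return float(df.notna().to_numpy().mean())

def data_reliability_score(pillar_scores: dict[str, float],
                           weights: dict[str, float] | None = None) -> float:
    """Aggregate the six pillar scores into a single DRS in [0, 1].

    Equal weights by default; the aggregation used in the study may
    differ (e.g., calibrated weights or a non-linear combination).
    """
    if weights is None:
        weights = {p: 1.0 for p in PILLARS}
    total = sum(weights[p] for p in PILLARS)
    return sum(weights[p] * pillar_scores[p] for p in PILLARS) / total
```

In practice each pillar would have its own statistically grounded estimator evaluated repeatedly over time, consistent with the longitudinal framing; completeness above is only the simplest case.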
By proactively identifying issues, DRS helps businesses ensure data reliability and prevent costly downstream failures. Just as low-performing models are not deployed, data with a low DRS should not be trusted for business decisions.
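A short sketch of that deployment-gate analogy, assuming a fixed acceptance threshold (the 0.9 value and the gate_on_drs helper are hypothetical, not part of the study):

```python
DRS_THRESHOLD = 0.9  # hypothetical; would be calibrated to business risk

def gate_on_drs(drs: float, threshold: float = DRS_THRESHOLD) -> bool:
    """Mirror the model-deployment gate: release data for decisioning
    only if its reliability score clears the bar."""
    return drs >= threshold

if not gate_on_drs(0.84):
    print("DRS below threshold: data held back from business decisions")
```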
Data Reliability
Model Performance
Business Decision Trustworthiness
Preventing Data Decision Failures
Data-Centric AI
Main Sponsor
Business Analytics/Statistics Education Interest Group