AI-Assisted Conversational Interviewing: Effects on Response Quality and User Experience

Soubhik Barari, Co-Author

Brandon Sepulvado, Speaker

Wednesday, Aug 6: 2:25 PM - 2:45 PM
Topic-Contributed Paper Session
Music City Center 
Standardized survey interviews are straightforward to administer but fail to address idiosyncratic data quality issues for individual respondents. Conversational interviews, by contrast, enable personalized interactions and richer data collection, but do not easily scale or allow for quantitative comparison. To bridge this divide, we introduce a framework for AI-assisted conversational survey interviewing. Among other capabilities, AI 'textbots' can dynamically probe respondents and live code open-ended responses with real-time respondent validation. To evaluate these capabilities, we conducted an experiment on a conversational AI platform, randomly assigning participants to textbots that performed probing and coding on open-ended questions. Our findings show that, even without further fine-tuning, textbots perform moderately well in live coding and can improve the specificity, detail, and informativeness of open-ended responses. These gains come with slight negative impacts on user experience, as measured by self-reported evaluations and respondent attrition. Our investigation demonstrates the potential for AI-assisted conversational interviewing to enhance data quality for open-ended questions.