Bayesian Hierarchical Modeling of Large-Scale Math Tutoring Dialogues
Thursday, Aug 7: 11:05 AM - 11:20 AM
2637
Contributed Papers
Music City Center
We propose a Bayesian hierarchical framework for analyzing large-scale mathematics tutoring dialogues that models cognitive load as latent variables inferred from observable behavioral patterns in educational conversations. Our approach treats response-timing patterns and communication-modality choices (i.e., sending text vs. images) as observable indicators of underlying cognitive states, using a two-phase experimental design that compares behavioral-only models against content-enhanced models incorporating LLM-based understanding classification. Applied to MathMentorDB, a corpus of 5.4 million messages across 200,332 tutoring conversations, our method reveals bidirectional cognitive dependencies in which student confusion systematically increases tutor cognitive load, and vice versa. We demonstrate that temporal and modality patterns can reliably indicate latent cognitive states in educational dialogues, with cross-role dependencies offering new insight into collaborative learning dynamics. This work bridges education research, Bayesian statistics, and natural language processing, providing both methodological innovations for modeling cognitive load in online learning conversations and actionable insights for designing adaptive tutoring systems.
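As a rough illustration of the modeling idea described above (not the authors' implementation), the sketch below builds a small hierarchical latent-variable model in PyMC in which a per-message latent "cognitive load" drives two observable indicators: log response time and modality choice (text vs. image). All variable names, priors, and the simulated toy data are assumptions made for illustration only.

```python
# Minimal sketch, assuming a PyMC-style hierarchical latent-variable model:
# conversation-level means over a per-message latent cognitive load, which is
# linked to two observed indicators (log response time, image-vs-text choice).
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_conv, n_msg = 50, 20                              # toy data: conversations x messages
conv_idx = np.repeat(np.arange(n_conv), n_msg)
resp_time = rng.lognormal(mean=1.0, sigma=0.5, size=n_conv * n_msg)
sent_image = rng.integers(0, 2, size=n_conv * n_msg)

with pm.Model() as model:
    # Conversation-level hierarchy over the latent cognitive-load state
    mu_conv = pm.Normal("mu_conv", 0.0, 1.0, shape=n_conv)
    sigma_load = pm.HalfNormal("sigma_load", 1.0)
    load = pm.Normal("load", mu=mu_conv[conv_idx], sigma=sigma_load,
                     shape=n_conv * n_msg)          # per-message latent load

    # Indicator 1: higher load -> longer response times (modeled on log scale)
    beta_t = pm.HalfNormal("beta_t", 1.0)
    sigma_t = pm.HalfNormal("sigma_t", 1.0)
    pm.Normal("log_rt", mu=beta_t * load, sigma=sigma_t,
              observed=np.log(resp_time))

    # Indicator 2: higher load -> higher probability of switching to an image
    alpha = pm.Normal("alpha", 0.0, 1.0)
    beta_m = pm.Normal("beta_m", 0.0, 1.0)
    pm.Bernoulli("image", logit_p=alpha + beta_m * load, observed=sent_image)

    idata = pm.sample(500, tune=500, chains=2, target_accept=0.9)
```

A full version of the framework would additionally couple student and tutor latent states across turns to capture the bidirectional dependencies described in the abstract; this sketch only shows the measurement-model layer.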
Large Language Models
Educational Data Mining
Bayesian Hierarchical Modeling
Artificial Intelligence (AI)
Data Science
Natural Language Processing
Main Sponsor
Section on Statistics and Data Science Education