Enhancing SQL Code Efficiency in Insurance Data with LLMs: A Repeated Measures Approach
Abstract Number:
1005
Submission Type:
Contributed Abstract
Contributed Abstract Type:
Poster
Participants:
Philip Wong (1), Gabriel Cotapos Jr (2), Sean McCarthy (2)
Institutions:
(1) CSAA IG, N/A, (2) CSAA, N/A
Co-Author(s):
First Author:
Presenting Author:
Abstract Text:
This project evaluates the effectiveness of a large language model (LLM)-driven tool for SQL documentation, programming-language conversion, and SQL code generation. The experiment tests the tool on code samples at three complexity levels (beginner, intermediate, and advanced) under three prompt conditions (minimally defined, moderately defined, and extremely defined). Raters will assess the LLM-generated outputs against a pre-set rubric. A Repeated Measures ANOVA will be used to determine the impact of the experimental conditions on the tool's performance, and inter-rater reliability will be measured with Cohen's kappa and/or Fleiss' kappa to ensure consistent evaluation.
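To illustrate the planned analysis, the following Python sketch shows how a two-factor Repeated Measures ANOVA and Cohen's kappa could be computed on rubric scores. It is a minimal sketch, not the study's actual analysis code: the data, the column names (task, complexity, prompt, score), and the use of statsmodels and scikit-learn are all assumptions made for illustration.

# Minimal sketch of the planned analysis; all data and column names below are
# hypothetical placeholders, not the study's actual schema or results.
import itertools
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical fully crossed long-format data: each "task" receives one rubric
# score under every combination of the 3 complexity levels and 3 prompt
# conditions, as a repeated-measures design requires.
rows = [
    {"task": t, "complexity": c, "prompt": p, "score": int(rng.integers(2, 6))}
    for t, c, p in itertools.product(
        range(1, 9),
        ["beginner", "intermediate", "advanced"],
        ["minimal", "moderate", "extreme"],
    )
]
scores = pd.DataFrame(rows)

# Two-way Repeated Measures ANOVA with complexity and prompt condition as
# within-subject factors and task as the repeated-measures unit.
res = AnovaRM(scores, depvar="score", subject="task",
              within=["complexity", "prompt"]).fit()
print(res.anova_table)

# Cohen's kappa for agreement between two raters scoring the same outputs
# (illustrative ratings only; Fleiss' kappa would cover three or more raters).
rater_a = [4, 3, 2, 5, 4, 3, 5, 4, 4]
rater_b = [4, 3, 3, 5, 4, 3, 4, 4, 4]
print("Cohen's kappa:", cohen_kappa_score(rater_a, rater_b))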
Keywords:
LLM (Large Language Models)|SQL Code|Documentation|Quality Evaluation|Repeated Measures ANOVA|Inter-Rater Reliability Statistics
Sponsors:
Quality and Productivity Section
Tracks:
Statistical Process Control and Quality Assurance
Can this be considered for alternate subtype?
Yes
Are you interested in volunteering to serve as a session chair?
No
I have read and understand that JSM participants must abide by the Participant Guidelines.
Yes
I understand that JSM participants must register and pay the appropriate registration fee by June 3, 2025. The registration fee is non-refundable.
I understand