Recent Innovations and Advances in Mixed-Mode Surveys

Chair

Emily Berg

Discussant

Cameron McPhee, SSRS

Organizers

Emily Berg
Brad Edwards, Westat
 
Tuesday, Aug 5: 8:30 AM - 10:20 AM
0419 
Invited Paper Session 
Music City Center 
Room: CC-209C 

Applied


Main Sponsor

Journal of Survey Statistics and Methodology

Co-Sponsors

AAPOR
Survey Research Methods Section

Presentations

Demographic differences in response to mixed-mode surveys

Mixed-mode surveys, specifically those using mail contacts to offer web and paper response modes, have been increasing in popularity since the early 2000s. As internet access and usage have become more ubiquitous, researchers question whether and when to offer sampled cases a paper questionnaire, and whether that answer differs by demographic subgroup. This paper leverages a five-treatment response mode experiment (paper-only, web-only, sequential web-paper, choice, and choice-plus [choice with a promised incentive for responding by web]) conducted within a new federally sponsored, nationally representative survey. The analysis focuses on whether response rates and the percentage of web responses under each treatment varied by demographic subgroups such as age, educational attainment, and household income. These subgroups are related to a person's expected comfort with the internet, which is hypothesized to influence their willingness to respond to surveys by web. This research contributes to the field's understanding of who is likely to respond to a survey under a given response mode treatment.
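As a rough illustration of the tabulation this analysis implies, the sketch below computes response rates and the web share of responses by treatment and age group. The records, field names, and group labels are fabricated for illustration only; they are not the survey's actual data.

```python
from collections import defaultdict

# Toy records: (treatment, age_group, responded, responded_by_web).
# The five treatments mirror the experiment described above; all
# values here are invented purely to show the tabulation.
records = [
    ("web-only",    "18-34", True,  True),
    ("web-only",    "65+",   False, False),
    ("paper-only",  "65+",   True,  False),
    ("choice",      "18-34", True,  True),
    ("choice",      "65+",   True,  False),
    ("choice-plus", "35-64", True,  True),
    ("sequential",  "35-64", False, False),
    ("sequential",  "18-34", True,  True),
]

# (treatment, group) -> [sampled, responded, responded by web]
counts = defaultdict(lambda: [0, 0, 0])
for trt, grp, resp, web in records:
    cell = counts[(trt, grp)]
    cell[0] += 1
    cell[1] += resp
    cell[2] += web

for (trt, grp), (n, resp, web) in sorted(counts.items()):
    rr = resp / n                        # response rate
    pct_web = web / resp if resp else 0  # share of responses by web
    print(f"{trt:12s} {grp:6s} RR={rr:.0%} web={pct_web:.0%}")
```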

Keywords

Mixed-mode surveys

response mode

mode preference

incentive 

Co-Author(s)

Rebecca Medway, University of Maryland
Sarah Heimel

Speaker

Rebecca Medway, University of Maryland

Improving Inferences Based on Survey Data Collected Using Mixed-Mode Designs

Although survey modes often have different measurement properties, standard practice is to pool mixed-mode data, neglecting the potential impact of mode effects. This study proposes three approaches to incorporating mode effects into inference: a "Testimator" approach, a Bayesian approach, and a model averaging approach. In the "Testimator" approach, we test whether the means and variances of the mixed-mode samples are the same. If the null hypothesis is not rejected, we take the average of the estimates; otherwise, we use the estimate in the preferred direction (assumed to be known). In the Bayesian approach, we estimate the effect size and the ratio of variances to determine cutoffs, then use the probabilities of these two quantities falling into different cutoff regions as weights to combine the estimates. In the model averaging approach, we combine estimates from four models, each assuming either the same or different means and variances across modes, using marginal posteriors as weights. Compared with existing methods, the proposed approaches incorporate the testing procedure into inference, leading to more robust results. We evaluate these methods through a simulation study and apply them to data from Wave 6 of the Arab Barometer survey.

Co-Author(s)

Trivellore Raghunathan, Institute for Social Research
Michael Elliott, University of Michigan

Speaker

Wenshan Yu

Improving the Efficiency of Outbound CATI As a Nonresponse Follow-Up Mode in Address-Based Samples: A Quasi-Experimental Evaluation of a Dynamic Adaptive Design

This presentation evaluates the use of dynamic adaptive design methods to target outbound computer-assisted telephone interviewing (CATI) in the California Health Interview Survey (CHIS). CHIS is a large-scale, annual study that uses an address-based sample (ABS) with push-to-web mailings, followed by outbound CATI follow-up for addresses with appended phone numbers. CHIS 2022 implemented a dynamic adaptive design in which predictive models were used to end dialing early for some cases. For addresses that received outbound CATI follow-up, dialing was paused after three calls, and a response propensity (RP) model was applied to predict the probability that the address would respond to continued dialing, based on the outcomes of the first three calls. Low-RP addresses were permanently retired with no additional dialing, while the rest continued through six or more attempts. We use a difference-in-differences design to evaluate the effect of the adaptive design on calling effort, completion rates, and the demographic composition of respondents. We find that the adaptive design reduced the mean number of calls per sampled unit by about 14 percent (relative to a modeled no-adaptive-design counterfactual), with a minimal reduction in the completion rate and no strong evidence of changes in the prevalence of target demographics. This suggests that RP modeling can meaningfully distinguish between ABS sample units for which additional dialing is and is not productive, helping to control outbound dialing costs without compromising sample representativeness.
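A minimal sketch of the stop-rule idea, assuming a hand-fit logistic RP model on fabricated three-call paradata and an illustrative 10 percent retirement cutoff; neither the features nor the cutoff reflects CHIS's actual specification.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
# Fabricated paradata from the first three call attempts.
X = np.column_stack([
    np.ones(n),                  # intercept
    rng.integers(0, 2, n),       # any human contact in first 3 calls
    rng.integers(0, 4, n),       # count of answering-machine outcomes
    rng.integers(0, 2, n),       # any bad-number indicator
]).astype(float)
# Simulated eventual-response outcome, loosely tied to contact history.
true_beta = np.array([-2.0, 1.5, 0.3, -1.0])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

# Fit a logistic response-propensity model by simple gradient ascent.
beta = np.zeros(4)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.01 * X.T @ (y - p) / n

rp = 1 / (1 + np.exp(-X @ beta))   # predicted response propensity
retire = rp < 0.10                 # permanently stop dialing low-RP cases
continue_dialing = ~retire         # remaining cases go to 6+ attempts
```

In practice the model would be fit on prior cases with known final outcomes and applied to active sample; the in-sample fit here is only to keep the sketch self-contained.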

Keywords

Adaptive design

Paradata

Predictive modeling

Address-based sampling

Phone surveys 

Speaker

Michael Jackson, SSRS