Survey Nonresponse: Integrating Perspectives on Design, Analysis and Adjustment

Chair

Darcy Steeg Morris, U.S. Census Bureau

Discussant

Jeffrey Gonzalez, U.S. Bureau of Labor Statistics

Organizers

Jeffrey Gonzalez, U.S. Bureau of Labor Statistics
John Eltinge, U.S. Census Bureau (retired)
 
Tuesday, Aug 5, 8:30 AM - 10:20 AM
Session 0414: Invited Paper Session
Music City Center, Room CC-207D

Keywords

Data quality 

Applied

Yes

Main Sponsor

Survey Research Methods Section

Co-Sponsors

Government Statistics Section
Social Statistics Section

Presentations

Sampling Low-Incidence Populations Under Anticipated Nonresponse

Survey sampling theory on optimal allocation typically assumes 100% response rates. This has led sample designers to resort to ad hoc practices for accommodating anticipated nonresponse, such as computing classical allocations under complete response and then inflating them for anticipated sample loss. In a previous paper (2024), we showed that these standard practices can perform quite poorly in some situations. For instance, in an application with a large degree of differential nonresponse, our proposed allocation increased the effective sample size by 25% relative to standard practices.
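
A minimal sketch of the standard practice the abstract critiques, with purely illustrative stratum sizes, standard deviations, and anticipated response rates: compute a classical Neyman allocation as if response were complete, then inflate each stratum by its anticipated response rate and rescale to the budget.

    import numpy as np

    # Illustrative inputs (all assumed): three strata with sizes N_h,
    # unit standard deviations S_h, and anticipated response rates r_h.
    N = np.array([50_000, 30_000, 20_000])
    S = np.array([1.0, 2.5, 4.0])
    r = np.array([0.60, 0.35, 0.20])
    n_total = 2_000                      # total sample budget

    # Step 1: classical Neyman allocation, derived under complete response.
    n_full = n_total * (N * S) / np.sum(N * S)

    # Step 2: ad hoc adjustment -- inflate by 1/r_h to offset anticipated
    # sample loss, then rescale so the total budget is respected.
    w = n_full / r
    n_adhoc = n_total * w / w.sum()

    # Expected respondents per stratum; under strong differential
    # nonresponse this can sit far from the precision-optimal allocation,
    # which is the gap the paper targets.
    print(np.round(n_adhoc), np.round(r * n_adhoc))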

Here, we extend our previous work, which assumed that all members of the frame are eligible population members, to situations where eligibility is not known in advance. For instance, low-incidence populations can be challenging to survey because population membership is not recorded on the frame, although auxiliary data are often available for constructing strata with different concentrations (eligibility rates) of the target population. We provide new theory on optimal allocation for low-incidence populations under anticipated nonresponse, treating eligibility through an analogy to domain estimation; in contrast with previous theory on sampling for rare populations, nonresponse is built into our formulation. We present theoretical results and will compare our allocation with existing approaches in an application.
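
Continuing the illustration above (all rates assumed; this is not the paper's proposed allocation): when eligibility is unknown, a sampled unit yields an eligible respondent only with probability e_h * r_h, so a naive screening design must inflate for two sources of loss at once.

    import numpy as np

    N = np.array([50_000, 30_000, 20_000])
    S = np.array([1.0, 2.5, 4.0])
    r = np.array([0.60, 0.35, 0.20])      # anticipated response rates
    e = np.array([0.05, 0.15, 0.40])      # assumed stratum eligibility rates
    n_total = 2_000

    # Naive double inflation of classical Neyman shares by 1/(e_h * r_h).
    shares = (N * S) / np.sum(N * S)
    w = shares / (e * r)
    n_naive = n_total * w / w.sum()

    # Expected eligible respondents per stratum under the naive design.
    print(np.round(n_naive), np.round(e * r * n_naive))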

Keywords

Sampling

Sample design

Sample allocation

Nonresponse

Rare populations 

Co-Author(s)

Michael Elliott, University of Michigan
Jonathan Mendelson, U.S. Bureau of Labor Statistics

Speaker

Michael Elliott, University of Michigan

The Combined Effects of Prior-Wave Item Nonresponse and Perceived Burden on Subsequent-Wave Nonresponse in a Longitudinal Survey

This paper examines the problem of wave nonresponse in longitudinal surveys. Wave nonresponse reduces the sample size available for estimating trends and changes over time, so it is critical to identify the factors that contribute to it. This study focuses on two such factors identified in the survey literature: item nonresponse and perceived burden at a prior wave. According to the response continuum model (Yan and Curtin, 2010), item nonresponse at a prior wave is predictive of nonresponse to the subsequent wave; for example, panelists with a higher level of item nonresponse at Wave 1 are expected to have a higher likelihood of not responding to Wave 2. The response burden framework (Yan and Williams, 2022) predicts a positive relationship between perceived burden and wave nonresponse; that is, panelists who perceive greater burden are less likely to participate in the next round of interviews. I will examine the separate effects of prior-wave item nonresponse and perceived burden on nonresponse to the next wave of interviews, and I will also investigate the combined effects of these two factors using data from a longitudinal web survey. The findings will have important practical implications and can be used to inform adaptive designs for reducing wave nonresponse in longitudinal surveys.
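
The two hypothesized pathways and their interaction can be pictured with a short, hedged sketch (simulated data and hypothetical variable names; not the study's actual models): a logistic regression of Wave 2 nonresponse on the Wave 1 item-nonresponse rate, perceived burden, and their product term for the combined effect.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 5_000
    df = pd.DataFrame({
        "item_nr_rate_w1": rng.beta(1, 9, n),       # share of Wave 1 items skipped
        "perceived_burden": rng.integers(1, 6, n),  # 1-5 burden rating at Wave 1
    })
    # Simulated outcome: higher item nonresponse and burden raise dropout
    # odds, consistent with the two frameworks cited above.
    xb = -2.0 + 3.0 * df["item_nr_rate_w1"] + 0.3 * df["perceived_burden"]
    df["nonresp_w2"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-xb)))

    # Separate effects plus the interaction (the "combined" effect).
    fit = smf.logit("nonresp_w2 ~ item_nr_rate_w1 * perceived_burden",
                    data=df).fit(disp=False)
    print(fit.summary())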

Keywords

longitudinal survey

item nonresponse

response burden

wave nonresponse 

Speaker

Ting Yan, NORC at the University of Chicago

Using New Measures of Selection Bias to Develop General Adjustments for Survey Nonresponse

Rapidly declining response rates in surveys across the world, regardless of the mode of data collection, have forced survey statisticians and methodologists to consider alternative measures of the quality of survey estimates that allow for the possibility of non-ignorable selection mechanisms. This talk will introduce a number of recently developed measures of selection bias in common survey estimates that give survey statisticians general methods for adjusting for the selection mechanisms associated with survey data sets, whether selection arises from sampling or nonresponse, and whether it is ignorable or non-ignorable. These new measures are entirely model-based and enable users to perform sensitivity analyses that examine the potential bias introduced into estimates by more complex sampling and nonresponse mechanisms. The talk will also discuss the auxiliary data sources needed to apply the measures and available software implementing their calculation (and the corresponding adjustments to survey estimates).
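
One widely used estimator in this line of work, the proxy pattern-mixture estimator of Andridge and Little (2011), on which several such indices build, has roughly the following form (notation here is an illustrative sketch, not the talk's exact formulation):

    \hat{\mu}_Y(\phi) \;=\; \bar{y}_R \;+\;
      \frac{\phi + (1-\phi)\,\hat{\rho}}{\phi\,\hat{\rho} + (1-\phi)}
      \cdot \frac{s_Y}{s_X}\,\bigl(\bar{X} - \bar{X}_R\bigr),
      \qquad \phi \in [0, 1],

where X is a proxy for the survey variable Y constructed from auxiliary data, \bar{X} and \bar{X}_R are its full-sample and respondent means, and \hat{\rho} is the respondent correlation between Y and the proxy. Setting \phi = 0 recovers the ignorable (missing-at-random) regression estimator, \phi = 1 lets selection depend entirely on Y, and sweeping \phi across [0, 1] traces out the kind of sensitivity interval the abstract describes.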

Speaker

Brady West, Institute for Social Research