Evaluations and Applications in Nonresponse/Selection Bias

Chair: Martha McRoy, NORC at the University of Chicago

Tuesday, Aug 5: 10:30 AM - 12:20 PM
Session 4107, Contributed Papers
Music City Center, Room CC-201B

Main Sponsor

Government Statistics Section

Presentations

Teaching Practical Significance: Integrating Effect Size Concepts in the Interpretation of Survey-Based Research

Effect sizes are central to statistical analysis, particularly in interpreting large-scale survey and health data where statistical significance alone may not capture real-world relevance. In an era of declining survey response rates and increasing concern about data quality, helping students grasp the concept of effect sizes is essential for building statistical literacy. This talk explores strategies for teaching effect sizes to undergraduate health science students, with a focus on connecting theoretical concepts to applied contexts such as national health and labor surveys. We demonstrate how effect size interpretation complements statistical significance, especially when working with biased or complex survey data. Practical teaching methods, real-world examples, and common instructional challenges will be discussed. By integrating effect size interpretation into curricula that touch on survey data, educators can prepare students to critically evaluate the practical importance of research findings—an essential skill for evidence-based practice and data-driven decision-making. 
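
To make the distinction concrete, here is a minimal, hypothetical classroom-style sketch in Python (simulated data, not material from the talk): with a survey-scale sample, a negligible group difference can still produce a tiny p-value, while Cohen's d shows it has little practical importance.

    # Simulated illustration: statistical vs. practical significance.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 50_000                                            # large, survey-scale groups
    group_a = rng.normal(loc=100.0, scale=15.0, size=n)
    group_b = rng.normal(loc=100.5, scale=15.0, size=n)   # tiny true shift

    t_stat, p_value = stats.ttest_ind(group_a, group_b)

    # Cohen's d: mean difference scaled by the pooled standard deviation.
    pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
    cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd

    print(f"p-value: {p_value:.2e}")      # "significant" at any common alpha
    print(f"Cohen's d: {cohens_d:.3f}")   # ~0.03, far below the 0.2 "small" benchmark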

Keywords

Effect Sizes

Health Sciences Education

Statistical Literacy

Evidence-Based Practice

Teaching Methodologies

First Author

Abraham Ayebo, University of Minnesota Rochester

Presenting Author

Abraham Ayebo, University of Minnesota Rochester

An Exploration of Current Population Survey Nonresponse

The U.S. Current Population Survey (CPS) produces a wealth of labor force statistics for a variety of demographic groups, including those in The Employment Situation, a monthly release by the Bureau of Labor Statistics that is designated as a Principal Federal Economic Indicator by the Office of Management and Budget. In an environment of depressed response rates, any significant nonresponse bias in the CPS is of great economic consequence. Throughout most of its nearly 80-year history, CPS response rates were above 90 percent, but declines accelerated in the 2010s, even before the upheaval of the Covid-19 pandemic, and rates have since settled near 70 percent, increasing the likelihood of nonresponse bias and the potential for adverse national impacts. In this paper, various aspects of CPS nonresponse and weighting are explored, with the intent to identify possible bias, plausible corrections, and critical areas for continued research.
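
As one illustration of the kind of weighting the paper examines, below is a minimal sketch of a weighting-class nonresponse adjustment on invented data (this is not BLS production code): respondent base weights within each adjustment cell are inflated to carry the weight of that cell's nonrespondents.

    # Hypothetical weighting-class nonresponse adjustment (invented data).
    import pandas as pd

    # Each row is a sampled household: base weight, an adjustment-cell
    # variable known for everyone on the frame, and a response flag.
    frame = pd.DataFrame({
        "base_weight": [120.0, 95.0, 110.0, 130.0, 100.0, 105.0],
        "cell":        ["urban", "urban", "rural", "rural", "urban", "rural"],
        "responded":   [True, False, True, True, False, True],
    })

    # Within each cell, inflate respondent weights so they also carry
    # the base weight of that cell's nonrespondents.
    cell_totals = frame.groupby("cell")["base_weight"].sum()
    resp_totals = frame[frame["responded"]].groupby("cell")["base_weight"].sum()
    adj_factor = cell_totals / resp_totals

    respondents = frame[frame["responded"]].copy()
    respondents["nr_adjusted_weight"] = (
        respondents["base_weight"] * respondents["cell"].map(adj_factor)
    )
    print(respondents[["cell", "base_weight", "nr_adjusted_weight"]])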

Keywords

Current Population Survey

CPS

nonresponse

bias 

First Author

Justin McIllece, Bureau of Labor Statistics

Presenting Author

Justin McIllece, Bureau of Labor Statistics

WITHDRAWN Annual Business Survey Nonresponse Bias Analysis

The Annual Business Survey (ABS), conducted jointly by the U.S. Census Bureau and the National Center for Science and Engineering Statistics, provides information on selected economic and demographic characteristics of businesses and business owners by sex, ethnicity, race, and veteran status. The survey also measures research and development for microbusinesses, business topics such as innovation and technology, and other business characteristics. This study analyzes the systematic differences between survey respondents and nonrespondents, along with the potential bias due to nonresponse. The effects of nonresponse are of interest because they can introduce error into survey estimates. The analysis examines the association between sample frame variables and key response items, survey response rates, mean differences between respondents and nonrespondents on key items, and the relative nonresponse bias between the two groups.
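
For illustration, here is a minimal sketch of the relative-bias comparison described above, using an invented frame variable: the respondent mean is compared with the full-sample mean, and relative nonresponse bias is their difference divided by the full-sample mean. The data are hypothetical, not ABS figures.

    # Relative nonresponse bias from a frame variable (invented data).
    import numpy as np

    # A frame variable (e.g., payroll) known for the full sample.
    y = np.array([250.0, 40.0, 310.0, 90.0, 120.0, 500.0, 75.0, 60.0])
    responded = np.array([True, False, True, True, False, True, False, True])

    ybar_full = y.mean()                 # mean over the full sample
    ybar_resp = y[responded].mean()      # mean over respondents only
    ybar_nonresp = y[~responded].mean()  # mean over nonrespondents

    mean_difference = ybar_resp - ybar_nonresp
    relative_bias = (ybar_resp - ybar_full) / ybar_full

    print(f"respondent vs. nonrespondent mean difference: {mean_difference:.1f}")
    print(f"relative nonresponse bias: {relative_bias:+.1%}")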

Keywords

Respondents

Nonrespondents

Frame variables

Variable associations

Relative bias

First Author

Dhanapati Khatiwoda, US Census Bureau

WITHDRAWN Effects of Asking a Citizenship Question on Future Survey Response

A growing literature has investigated the effect of adding a citizenship question on survey nonresponse. This project expands on that literature by analyzing the impact of a citizenship question on future survey participation. We take advantage of the 2019 Census Test, a randomized controlled trial in which half of the sampled households received a questionnaire with a citizenship question while the rest received one without. We then link individuals in the 2019 Census Test sample housing units to their corresponding housing units in the 2020 Census using independently constructed administrative records. We address the following questions with the linked data: Did the presence of a citizenship question on the 2019 Census Test affect self-response rates to the 2020 Census? Did the effect vary by demographic characteristics? To what extent did 2020 Census nonresponse follow-up operations mitigate these effects?
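
As a hedged illustration of the headline comparison (the counts below are invented, not Census Bureau results), the effect on 2020 self-response can be estimated as a difference in proportions between the two 2019 arms.

    # Difference in 2020 self-response rates by 2019 arm (invented counts).
    import numpy as np
    from scipy import stats

    # Linked housing units and 2020 self-responders in each 2019 arm.
    n_treat, resp_treat = 10_000, 6_150   # received citizenship question
    n_ctrl,  resp_ctrl  = 10_000, 6_420   # did not

    rate_treat = resp_treat / n_treat
    rate_ctrl = resp_ctrl / n_ctrl
    effect = rate_treat - rate_ctrl       # difference in self-response rates

    # Two-sample proportion comparison via a 2x2 chi-square test.
    table = np.array([[resp_treat, n_treat - resp_treat],
                      [resp_ctrl,  n_ctrl - resp_ctrl]])
    chi2, p_value, _, _ = stats.chi2_contingency(table)

    print(f"effect on self-response rate: {effect:+.3f} (p = {p_value:.3g})")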

Keywords

Citizenship Question

Nonresponse

Survey Fatigue

Census

Administrative Records

Sensitive Questions 

Co-Author

J David Brown, U.S. Census Bureau

First Author

Andres Mira, U.S. Census Bureau

Step on that Rake! A Weighting Approach to Handle Complex Randomization in Natural Experiments

Randomized controlled trials eliminate confounding and reduce selection bias, allowing simple group comparisons to estimate average treatment effects (ATEs); in the real world, however, unequal group sizes, unbalanced covariates, and practical difficulties complicate implementation. Analytic strategies often employ statistical controls to adjust for such complications. Instead, we describe our use of weights to adjust for differential ratios across study sites and randomization blocks arising from a standardized lottery process that allocates housing. Each residential building held its own lottery following the same protocol; however, each building comprised different types of units, each with its own eligibility criteria and corresponding differences in supply relative to demand, which together produced the treatment group (those offered housing) and the control group (those not offered housing). Applying iterative proportional fitting, or raking, we create weights to address overlapping and intersecting unit-type eligibility. Using data at T2, we demonstrate how this approach plays out in analysis and compare the resulting ATE estimates against those from several alternative strategies.
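
For readers unfamiliar with raking, the following is a minimal sketch of iterative proportional fitting on invented margins (not the authors' code): unit weights are repeatedly rescaled so that weighted totals match each margin's targets in turn.

    # Raking (iterative proportional fitting) on invented margins.
    import numpy as np
    import pandas as pd

    units = pd.DataFrame({
        "site":      ["A", "A", "B", "B", "A", "B"],
        "unit_type": ["studio", "1br", "studio", "1br", "1br", "studio"],
        "weight":    np.ones(6),
    })

    # Hypothetical target totals for each margin (same grand total).
    targets = {
        "site":      {"A": 30.0, "B": 30.0},
        "unit_type": {"studio": 24.0, "1br": 36.0},
    }

    for _ in range(50):                   # iterate until margins converge
        for margin, target in targets.items():
            current = units.groupby(margin)["weight"].sum()
            factors = pd.Series(target) / current
            units["weight"] *= units[margin].map(factors)

    print(units.groupby("site")["weight"].sum())       # matches site targets
    print(units.groupby("unit_type")["weight"].sum())  # matches unit-type targets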

Keywords

Randomized control trial

natural experiment

weighting

raking 

Co-Author

Elyzabeth Gaumer, NYC Dept. of Housing

First Author

Daniel Goldstein, NYC Dept. of Housing

Presenting Author

Daniel Goldstein, NYC Dept. of Housing