Survey Methodology

Srijeeta Mitra, Chair
University of Maryland College Park
 
Monday, Aug 4, 2:00 PM - 3:50 PM
Session 4080: Contributed Papers
Music City Center, Room CC-106C

Main Sponsor

Survey Research Methods Section

Presentations

An individualized inference of social mobility via generative analysis of discrete data

Inspired by the concepts of individualized recommendation and personalized medicine, we propose an individualized inference method for social science to estimate intergenerational mobility (i-mobility) in American society. Leveraging the generative analysis framework introduced by Liu et al. (2021) and a kernel-smoothing metric for similarity scoring, our approach enables the tracking of changes in subject profiles defined by specific combinations of characteristics. This, in turn, provides insights into social change at the profile level, or near the individual level. Additionally, our method addresses key estimation challenges posed by small sample sizes and the presence of mixed data in social surveys.
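As a rough illustration of kernel-smoothed similarity scoring on mixed (categorical plus continuous) profiles, the toy function below combines an exact-match kernel for categorical fields with a Gaussian kernel for continuous ones. The combination rule, mismatch discount, and bandwidth are assumptions for illustration only, not the authors' method.

```python
import numpy as np

def profile_similarity(x, y, is_categorical, bandwidth=1.0):
    """Kernel-smoothed similarity between two mixed-data profiles.

    Gaussian kernel on continuous coordinates, exact-match kernel with
    a discounted mismatch on categorical ones, combined multiplicatively.
    All choices here (discount factor, bandwidth, product combination)
    are illustrative assumptions.
    """
    sim = 1.0
    for xi, yi, cat in zip(x, y, is_categorical):
        if cat:
            sim *= 1.0 if xi == yi else 0.5  # assumed mismatch discount
        else:
            sim *= np.exp(-((xi - yi) ** 2) / (2 * bandwidth ** 2))
    return float(sim)

# Two hypothetical profiles: (education level [categorical], log income [continuous])
a = ("college", 10.5)
b = ("college", 10.9)
print(profile_similarity(a, b, is_categorical=(True, False)))
```

A score near 1 marks near-identical profiles; subjects with high mutual similarity can then be pooled to stabilize estimates in small samples.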

Keywords

Intergenerational social mobility

Generative method

Personalized inference

Mixed data analysis 

Co-Author(s)

Dungang Liu, University of Cincinnati
Yuan Jiang, Oregon State University

First Author

Jiawei Huang, Carl H. Lindner College of Business, University of Cincinnati

Presenting Author

Jiawei Huang, Carl H. Lindner College of Business, University of Cincinnati

Measuring the Relative Influence of Social Desirability Bias and Risk of Disclosure on Response Bias

In this paper, we present results from an experimental test of the non-verbal response card (NVRC) and the verbal touch card. The NVRC is a two-sided card that allows respondents to answer questions nonverbally without the interviewer knowing the actual response. The touch card is a one-sided card that also allows respondents to answer nonverbally, but with the actual response known to the interviewer. The NVRC, the touch card, and the standard verbal response method were randomly assigned to 2,544 youth in a 2017 Burkina Faso health survey. The NVRC and touch card were used for sensitive questions on sexual experience, risk taking, trauma, and mental health. We examine the response patterns for nine questions on risky behavior and sexual experience. Comparisons of the three response methods allow us to separate the relative effects of social desirability and risk of disclosure on response bias. We find no significant differences in responses between the verbal and touch-card methods, but significantly higher reports of a number of risky behaviors and sexual experiences among respondents who used the NVRC compared to touch-card and verbal respondents combined.
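A comparison of reporting rates between two randomly assigned response modes can be tested with a standard two-proportion z-test, sketched below. The counts in the usage example are invented for illustration and are not the study's data.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(yes1, n1, yes2, n2):
    """Two-sided two-proportion z-test with a pooled variance estimate,
    the kind of comparison used to contrast reporting rates across
    randomly assigned response modes."""
    p1, p2 = yes1 / n1, yes2 / n2
    pooled = (yes1 + yes2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Invented counts: one arm vs the other two arms pooled
z, p = two_proportion_z(120, 800, 90, 1700)
```

Under random assignment, a higher reporting rate in the more private mode is interpreted as under-reporting in the less private one.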

Keywords

Social desirability bias

Response bias 

First Author

David Lindstrom, Brown University

Presenting Author

David Lindstrom, Brown University

Novel TRUMP for Precision, Efficiency and Complex Estimation (PEACE) Strategies in Survey Sampling

Singh and Sedory (2024, JSM Proceedings: "Future of Tuned Ratio Unbiased Mean Predictor (TRUMP) with the Unified Scrambling Approach (USA)") pointed out that the TRUMP with the USA has a wide scope for research in survey sampling when dealing with sensitive issues. In this presentation, we show that a novel Tuned Robust Unbiased Model Predictor (TRUMP) is expected to bring Precision, Efficiency and Complex Estimation (PEACE) strategies into practice when dealing with general linearly optimized best estimators (GLOBE) of a population total. The novel TRUMP model is robust in the face of many situations that can arise in real-world surveys. Several situations requiring the development of TRUMP Care coefficients for new types of TRUMP Cuts under complex designs will be discussed. A method to make a great adjustment (MAGA) to the TRUMP Care coefficients, utilizing the golden ratio, will be introduced to usher in the golden age of TRUMP methodology. If time permits, ideas of chain-type and grafted TRUMP Cuts for complex designs will be touched on. New theoretical developments and results of a recent simulation study will be reported.
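The abstract's estimand, a population total, is classically estimated with the Horvitz-Thompson estimator, one of the keywords below. The sketch is the standard textbook form only, not the TRUMP estimator itself.

```python
def horvitz_thompson_total(y, pi):
    """Horvitz-Thompson estimator of a population total: each sampled
    value is weighted by the inverse of its inclusion probability.
    Standard textbook form, shown only as background."""
    return sum(yi / p for yi, p in zip(y, pi))

# A unit sampled with probability 0.25 stands in for four population units
total_hat = horvitz_thompson_total([2.0, 4.0], [0.5, 0.25])
```

Calibration and jackknifing, the other keywords, respectively adjust these design weights toward known auxiliary totals and estimate the variance of the result.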

Keywords

Jackknifing

Calibration

TRUMP Cuts

TRUMP Care coefficient

TRUMP Subsidy

Horvitz-Thompson Estimator 

Co-Author

Stephen Sedory, Texas A & M University - Kingsville

First Author

Sarjinder Singh, Texas A&M University-Kingsville

Presenting Author

Sarjinder Singh, Texas A&M University-Kingsville

Trajectory analysis with attrition weights using a finite mixture model and Bayesian framework

Numerous approaches to trajectory analysis are currently available. To our knowledge, however, none are flexible enough to incorporate the weights that often accompany survey data. We therefore introduce a novel approach to trajectory analysis for longitudinal survey data with a continuous outcome that incorporates attrition weights through a finite mixture model. Under the Bayesian framework, we use a birth-death process for clustering and allow trajectories to follow linear, quadratic, and cubic patterns. Simulation studies were used to evaluate the model's performance, and the results show high sensitivity, specificity, and accuracy with respect to cluster assignment. We applied the approach to the Together 5,000 (T5K) study, a U.S. nationwide, internet-based cohort designed to identify modifiable factors associated with HIV acquisition and PrEP uptake among HIV-vulnerable populations. Participants completed surveys and HIV tests at baseline and over the following four years. Attrition weights for each follow-up had previously been calculated. We identified distinct patterns of drug use, measured by ASSIST scores, over four years while accounting for participant attrition.
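To illustrate how unit-level weights can enter a finite mixture model, the sketch below runs a weighted EM algorithm for a univariate Gaussian mixture, with each unit's attrition weight scaling its contribution to the M-step. This is a simplified frequentist stand-in for the weighting idea, not the authors' Bayesian birth-death sampler, and all implementation choices (quantile initialization, fixed component count) are assumptions.

```python
import numpy as np

def weighted_gmm_em(y, w, k=2, iters=100):
    """Weighted EM for a univariate Gaussian mixture: attrition weights w
    multiply each unit's responsibilities in the M-step, so units standing
    in for more of the population pull the estimates harder."""
    mu = np.quantile(y, np.linspace(0.25, 0.75, k))  # assumed initialization
    sigma = np.full(k, y.std())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities (normalizing constants cancel)
        dens = pi * np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2) / sigma
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: each unit contributes in proportion to its weight
        wr = r * w[:, None]
        nk = wr.sum(axis=0)
        mu = (wr * y[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((wr * (y[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / nk.sum()
    return pi, mu, sigma
```

With weights all equal to one this reduces to ordinary EM; unequal attrition weights shift the mixture toward the trajectories of units more likely to have dropped out.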

Keywords

Weighted trajectory analysis

Attrition weights

Finite mixture model

Bayesian 

Co-Author(s)

Drew Westmoreland, University of Florida
Christian Grov, Department of Community Health and Social Sciences, City University of New York
Hongmei Zhang, University of Memphis
Meredith Ray, School of Public Health, University of Memphis

First Author

Samia Sultana, University of Memphis

Presenting Author

Samia Sultana, University of Memphis

Using a non-response survey to assess the risk of bias in large-scale educational assessments

Large-scale educational assessments (LSA) are widely used to derive probabilistic knowledge about the distribution of certain characteristics or outcomes in an educational system. This knowledge typically informs various aspects of education policy in both national and international contexts. A fundamental difficulty these studies face in justifying their conclusions lies in the inferential problems posed by non-response, which raises concerns about both sampling and non-sampling errors in the inference.
We propose using a "non-response survey" (NRS) to investigate the risk of non-sampling errors in the inferences drawn by these studies. The NRS collects information on a subset of variables deemed central to the study and is administered to sampled units that did not participate in the LSA. To assess the risk of non-sampling errors, the distribution of each key variable is compared between the information gathered by the LSA and by the NRS. Besides discussing the methodological merits of such an analysis, we present results of a feasibility study in which we conducted an NRS in two countries within the context of the OECD's Teaching and Learning International Survey (TALIS).
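For a key categorical variable, the LSA-versus-NRS comparison can be as simple as tabulating percentage-point differences in category shares between respondents and non-respondents, as in this sketch. Category labels and data in the usage example are invented for illustration.

```python
from collections import Counter

def category_share_differences(lsa_values, nrs_values):
    """Percentage-point difference in category shares between LSA
    respondents and NRS (non-respondent) cases, per category.
    Large gaps flag a risk of non-response bias for that variable."""
    def shares(vals):
        counts = Counter(vals)
        n = len(vals)
        return {k: counts[k] / n for k in counts}
    s_lsa, s_nrs = shares(lsa_values), shares(nrs_values)
    cats = sorted(set(s_lsa) | set(s_nrs))
    return {c: round(100 * (s_lsa.get(c, 0) - s_nrs.get(c, 0)), 1)
            for c in cats}

# Invented data: respondents skew toward one category
diffs = category_share_differences(["a", "a", "a", "b"], ["a", "b"])
```

In practice such raw differences would be read alongside sampling error and the NRS's own response rate before concluding anything about bias.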

Keywords

Non-response

Non-response bias

Total survey error

Identifiability of population parameters

Large-scale educational assessment 

Co-Author(s)

Marlen Holtmann, IEA Hamburg
Sabine Meinck, IEA Hamburg

First Author

Diego Cortes, IEA Hamburg

Presenting Author

Diego Cortes, IEA Hamburg