Monday, Aug 4: 10:30 AM - 12:20 PM
4050
Contributed Posters
Music City Center
Room: CC-Hall B
Main Sponsor
ENAR
Presentations
Work is central to human life, shaping various aspects of individual well-being. Understanding the global dynamics of work is critical for informing policies and workplace practices that address disparities and promote greater well-being worldwide. This study analyzes Gallup World Poll data from 314,803 working adults across 145 nations (2020–2023), offering one of the most comprehensive global assessments of work enjoyment to date. We identify global trends in work enjoyment, examining its variation across sociodemographic groups. We assess two potential drivers of work enjoyment: having choices in one's work and perceiving one's work as valuable to others. Finally, we examine the relationship between work enjoyment and broader life satisfaction metrics, including overall life evaluation (Cantril Scale) and daily emotional experiences. Among our findings, work enjoyment emerges as the strongest driver of life evaluation and emotional well-being, compared to choice and contribution. Furthermore, experiencing enjoyment, choice, and contribution significantly reduces disparities in well-being across groups defined by income, education, sex, region, and marital status.
Keywords
work enjoyment
well-being
life evaluation
disparities
Gallup World Poll
public health
Co-Author(s)
Alex Dahlen, New York University, School of Global Public Health
Alden Lai, Assistant Professor of Public Health Policy and Management
First Author
Taehyo Kim, New York University
Presenting Author
Taehyo Kim, New York University
West Virginia has been identified as a colorectal cancer hotspot region due to its rising incidence and mortality among younger men. A healthy diet has been found to lower the risk of colorectal cancer. Yet limited access to healthy food sources fosters poor diets and further exacerbates disparities by neighborhood. Many studies focus on a narrow set of food sources (e.g., fast food or grocery stores), limiting our full understanding of neighborhood food environments. A comprehensive view of both healthy and unhealthy food sources can improve our ability to assess their impact on health disparities. Using a high-dimensional framework for food sources, we implement model-based clustering techniques to generate neighborhood food environment profiles. Leveraging data from the National Neighborhood Data Archive (NaNDA) and the U.S. Census, we apply this model to West Virginia to identify areas within the state that may limit opportunities for a healthy diet, and to assess their impact on colorectal cancer risk.
Keywords
model-based clustering
colorectal cancer
health disparities
neighborhood-level data
food access
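The model-based clustering step described above can be illustrated with a minimal sketch: a two-component Gaussian mixture fit by EM to synthetic tract-level food-source densities. The data, the number of profiles, and the variable names here are hypothetical illustrations, not taken from the NaNDA/Census analysis.

```python
import math
import random

random.seed(1)

# Hypothetical synthetic data: per-tract densities of (fast-food outlets,
# grocery stores) per 1,000 residents, drawn from two latent profiles.
low_access = [(random.gauss(3.0, 0.5), random.gauss(0.5, 0.2)) for _ in range(60)]
high_access = [(random.gauss(1.0, 0.4), random.gauss(2.5, 0.5)) for _ in range(60)]
tracts = low_access + high_access

def dnorm(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def density(pt, comp):
    # Diagonal-covariance bivariate normal density for one mixture component.
    return dnorm(pt[0], comp["mu"][0], comp["sd"][0]) * dnorm(pt[1], comp["mu"][1], comp["sd"][1])

# Crude initialization of the two components.
comps = [
    {"w": 0.5, "mu": [2.5, 1.0], "sd": [1.0, 1.0]},
    {"w": 0.5, "mu": [1.5, 2.0], "sd": [1.0, 1.0]},
]

for _ in range(50):  # EM iterations
    # E-step: posterior responsibility of each component for each tract.
    resp = []
    for pt in tracts:
        p = [c["w"] * density(pt, c) for c in comps]
        s = sum(p)
        resp.append([pi / s for pi in p])
    # M-step: update weights, means, and standard deviations.
    for k, c in enumerate(comps):
        nk = sum(r[k] for r in resp)
        c["w"] = nk / len(tracts)
        c["mu"] = [sum(r[k] * pt[d] for r, pt in zip(resp, tracts)) / nk for d in range(2)]
        c["sd"] = [max(0.1, math.sqrt(sum(r[k] * (pt[d] - c["mu"][d]) ** 2
                                          for r, pt in zip(resp, tracts)) / nk))
                   for d in range(2)]

# Assign each tract to its most probable food-environment profile.
labels = [max(range(2), key=lambda k: r[k]) for r in resp]
for k, c in enumerate(comps):
    print(f"profile {k}: weight={c['w']:.2f}, "
          f"mean fast-food={c['mu'][0]:.2f}, mean grocery={c['mu'][1]:.2f}")
```

In practice one would fit mixtures with full covariances and select the number of profiles by an information criterion (e.g., BIC), but the E/M alternation above is the core of model-based clustering.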
Over the past few decades, various advanced methods have been developed to facilitate information integration. These methods leverage summary statistics (e.g., point estimates) from multiple sites or studies, which can be readily extracted from existing publications or efficiently shared via correspondence, without requiring the sharing of raw individual-level data. Despite these advancements, existing methods may not be directly applicable to the varying coefficient model (VCM), a semi-parametric framework that allows certain covariate effects to vary with the values of another covariate. This paper addresses this gap by introducing a comprehensive information integration framework for VCM. This new framework (1) enables computationally efficient integration of information from a different model type (e.g., generalized linear models), (2) does not assume homogeneous data distributions across sites or studies, and (3) supports variable selection. Extensive simulations validate the proposed method, demonstrating substantial variance reduction with minimal estimation bias in various cases. Finally, we apply this method to two distinct datasets.
Keywords
Empirical likelihood
Information integration
Real-world data
Varying coefficient model
Variable selection
Co-Author
Chixiang Chen, University of Maryland School of Medicine
First Author
Jia Liang, St. Jude Children's Research Hospital
Presenting Author
Jia Liang, St. Jude Children's Research Hospital
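As background for the model class this framework targets, a varying coefficient model itself can be sketched by expanding the varying coefficient in a polynomial basis and fitting by ordinary least squares. This illustrates only a basic VCM fit, not the empirical-likelihood integration method of the abstract; all data and the quadratic basis are assumptions for illustration.

```python
import random

random.seed(2)

# Hypothetical VCM: y = b0 + b1(u) * x + noise, with b1(u) = 1 + 2u.
# We approximate both b0(u) and b1(u) with quadratic bases in u and fit by OLS.
n = 500
data = []
for _ in range(n):
    u = random.uniform(0, 1)       # effect-modifying covariate
    x = random.gauss(0, 1)
    y = 0.5 + (1 + 2 * u) * x + random.gauss(0, 0.3)
    data.append((u, x, y))

def design(u, x):
    # Quadratic basis in u for the intercept, interacted with x for the slope.
    return [1.0, u, u * u, x, x * u, x * u * u]

def solve(A, b):
    # Gaussian elimination with partial pivoting for the normal equations.
    m = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, m):
            f = M[r][col] / M[col][col]
            for c in range(col, m + 1):
                M[r][c] -= f * M[col][c]
    beta = [0.0] * m
    for r in range(m - 1, -1, -1):
        beta[r] = (M[r][m] - sum(M[r][c] * beta[c] for c in range(r + 1, m))) / M[r][r]
    return beta

# Accumulate X'X and X'y, then solve for the basis coefficients.
p = 6
XtX = [[0.0] * p for _ in range(p)]
Xty = [0.0] * p
for u, x, y in data:
    z = design(u, x)
    for i in range(p):
        Xty[i] += z[i] * y
        for j in range(p):
            XtX[i][j] += z[i] * z[j]
beta = solve(XtX, Xty)

def b1_hat(u):
    # Estimated varying coefficient of x at a given value of u.
    return beta[3] + beta[4] * u + beta[5] * u * u

print("b1(0.2) ~", round(b1_hat(0.2), 2), "(truth 1.4)")
print("b1(0.8) ~", round(b1_hat(0.8), 2), "(truth 2.6)")
```

Real VCM implementations typically use spline bases or kernel smoothing rather than a global polynomial; the basis-expansion idea is the same.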
Mediation analysis examines the pathways between predictors and outcomes through intermediate variables. We extend conventional mediation analysis by incorporating the cumulative impact of predictors on outcomes over time in longitudinal processes. We derive the cumulative direct and indirect effects of predictors on outcomes from multiple time points, allowing for multiple independent mediators at each time point. Specifically, our proposed model accounts for the effects of predictors and outcomes from all previous time points as mediators for the outcome variable at a given time point. We evaluate cumulative indirect effects and their standard errors using three approaches: exact form, the delta method, and the bootstrap procedure. We demonstrate that the indirect effect estimators from the least-squares method are unbiased under certain conditions, with the unbiasedness illustrated in simulation studies. We show that the three types of standard error estimates are numerically similar, with the bootstrap method recommended due to the complexity of the closed forms of the other two methods.
Keywords
longitudinal mediation analysis
multiple mediators
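The recommended bootstrap approach to indirect-effect standard errors can be sketched in the simplest single-timepoint mediation model X → M → Y (the cumulative, multi-timepoint version in the abstract is more involved). The path coefficients, noise levels, and sample size below are hypothetical.

```python
import random
import statistics

random.seed(3)

# Hypothetical mediation data with true paths a = 0.5 (X -> M) and
# b = 0.7 (M -> Y given X), so the true indirect effect is a*b = 0.35.
n = 400
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.5 * x + random.gauss(0, 0.5) for x in X]
Y = [0.7 * m + 0.2 * x + random.gauss(0, 0.5) for m, x in zip(M, X)]

def slope(y, x):
    # Simple-regression slope of y on x.
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def partial_slope(y, x1, x2):
    # Coefficient of x1 in the regression y ~ x1 + x2 (closed form).
    m1, m2, my = map(statistics.fmean, (x1, x2, y))
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((a - m2) ** 2 for a in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (b - my) for a, b in zip(x1, y))
    s2y = sum((a - m2) * (b - my) for a, b in zip(x2, y))
    return (s22 * s1y - s12 * s2y) / (s11 * s22 - s12 ** 2)

def indirect(xs, ms, ys):
    a = slope(ms, xs)              # X -> M path
    b = partial_slope(ys, ms, xs)  # M -> Y path, adjusting for X
    return a * b

point = indirect(X, M, Y)
# Nonparametric bootstrap: resample subjects, re-estimate the indirect effect.
boots = []
for _ in range(300):
    idx = [random.randrange(n) for _ in range(n)]
    boots.append(indirect([X[i] for i in idx], [M[i] for i in idx], [Y[i] for i in idx]))
se = statistics.stdev(boots)
print(f"indirect effect = {point:.3f} (truth 0.35), bootstrap SE = {se:.3f}")
```

The same resampling loop extends directly to the cumulative effects over multiple time points: only the `indirect` function changes, which is why the bootstrap avoids the complex closed-form and delta-method expressions.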
Confounding can lead to spurious associations. Typically, one must observe confounders in order to adjust for them, but in high-dimensional settings, recent research has shown that it becomes possible to adjust even for unobserved confounders. The methods for carrying out these adjustments, however, have not been thoroughly investigated. In this study, we explore various forms of unobserved confounding and assess how they introduce bias and variability into the data. We quantify the magnitude and structure of these effects by examining the ratios between bias, signal, and noise. We then construct various scenarios to demonstrate the impact of the amount and complexity of unobserved confounding on the performance of competing methods, including the LASSO, principal components LASSO (PC-LASSO), and penalized linear mixed models (PLMMs). Our findings highlight the importance of adjusting for unobserved confounding. In addition, we find that PLMM approaches are more robust than PC-LASSO in handling complex confounding structures while preventing the inclusion of spurious features into the model.
Keywords
Penalized Regression
Linear mixed models
Unobserved confounding
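The principal-components idea underlying PC-LASSO-style adjustment can be sketched without penalization: simulate a latent confounder shared across many features, recover it as the leading principal component, and compare naive versus PC-adjusted effect estimates. This is a conceptual illustration under assumed data-generating parameters, not the PC-LASSO or PLMM estimators compared in the study.

```python
import math
import random
import statistics

random.seed(4)

# Hypothetical setup: an unobserved confounder u loads on all p features
# and on the outcome, inflating the naive estimate for feature 0.
n, p = 300, 20
u = [random.gauss(0, 1) for _ in range(n)]  # unobserved confounder
X = [[ui + random.gauss(0, 1) for _ in range(p)] for ui in u]
# Outcome: true effect of feature 0 is 1.0; confounding effect is 2.0.
y = [X[i][0] * 1.0 + 2.0 * u[i] + random.gauss(0, 1) for i in range(n)]

# Center the feature columns.
mu = [statistics.fmean(col) for col in zip(*X)]
Xc = [[X[i][j] - mu[j] for j in range(p)] for i in range(n)]

# Leading principal component via power iteration on X'X.
C = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
v = [1.0] * p
for _ in range(100):
    w = [sum(C[a][b] * v[b] for b in range(p)) for a in range(p)]
    norm = math.hypot(*w)
    v = [wi / norm for wi in w]
pc = [sum(Xc[i][j] * v[j] for j in range(p)) for i in range(n)]  # confounder proxy

def slope(yv, xv):
    mx, my = statistics.fmean(xv), statistics.fmean(yv)
    return (sum((a - mx) * (b - my) for a, b in zip(xv, yv))
            / sum((a - mx) ** 2 for a in xv))

def partial_slope(yv, x1, x2):
    # Coefficient of x1 in the regression yv ~ x1 + x2 (closed form).
    m1, m2, my = map(statistics.fmean, (x1, x2, yv))
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((a - m2) ** 2 for a in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (b - my) for a, b in zip(x1, yv))
    s2y = sum((a - m2) * (b - my) for a, b in zip(x2, yv))
    return (s22 * s1y - s12 * s2y) / (s11 * s22 - s12 ** 2)

x0 = [row[0] for row in X]
naive = slope(y, x0)                    # biased upward by the confounder
adjusted = partial_slope(y, x0, pc)     # PC score absorbs most of u
print(f"naive slope = {naive:.2f}, PC-adjusted slope = {adjusted:.2f} (truth 1.0)")
```

The ratio of confounder signal to feature noise governs how well the principal component recovers u, which is one way to operationalize the bias/signal/noise ratios examined in the study.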
A team of statisticians streamlined the codebook creation and data deidentification process for the data repository of the National Institute of Child Health and Human Development Data and Specimen Hub (NDASH) by using SAS macros. NDASH requires projects to submit deidentified datasets with a formatted Excel codebook. Historically, projects created individual submissions, causing inefficiencies and inconsistencies. The streamlined process allows projects to input project-specific information to quickly generate the submission files, resulting in standardization, reduced time and errors, and a simplified process that still accommodates project differences. The key is the creation of a Master File Excel spreadsheet with all variable information. This spreadsheet becomes the driver for a series of macros that deidentify data and output the codebook. The macro approach saves time otherwise spent on manual coding. Additionally, minor edits are easily implemented in the spreadsheet without needing to dig through code, and then the macros can simply be rerun. To date, seven projects have utilized these macros for submission, and each has saved substantial time compared to the old process.
Keywords
Master File
macros
data repository
DASH codebook
data deidentification
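The workflow above is implemented as SAS macros driven by the Master File; a conceptual Python analog shows the core idea: a single table of variable metadata drives both the deidentification pass and the codebook output. All variable names, actions, and example records below are hypothetical, not taken from any NDASH project.

```python
# Hypothetical Master File rows: one entry per variable, specifying its
# label and the deidentification action to apply (names are illustrative).
master = [
    {"variable": "subj_id",  "label": "Subject ID",      "action": "recode_id"},
    {"variable": "birth_dt", "label": "Birth date",      "action": "drop"},
    {"variable": "bmi",      "label": "Body mass index", "action": "keep"},
]

# Hypothetical raw study records.
raw = [
    {"subj_id": "WV-001", "birth_dt": "1980-02-14", "bmi": "27.1"},
    {"subj_id": "WV-002", "birth_dt": "1975-09-30", "bmi": "31.4"},
]

def deidentify(rows, master):
    # Apply each variable's action: drop identifiers, recode IDs, keep the rest.
    keep = [m for m in master if m["action"] != "drop"]
    id_map, out = {}, []
    for row in rows:
        new = {}
        for m in keep:
            val = row[m["variable"]]
            if m["action"] == "recode_id":
                # Replace direct identifiers with sequential study codes.
                val = id_map.setdefault(val, f"S{len(id_map) + 1:04d}")
            new[m["variable"]] = val
        out.append(new)
    return out

def codebook(master):
    # The same Master File drives the codebook: one row per retained variable.
    return [(m["variable"], m["label"], m["action"])
            for m in master if m["action"] != "drop"]

clean = deidentify(raw, master)
print(clean)
print(codebook(master))
```

As in the SAS version, changing a variable's treatment means editing one row of the metadata table and rerunning, with no changes to the processing code itself.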