Thursday, Aug 7: 8:35 AM - 10:20 AM
4212
Contributed Papers
Music City Center
Room: CC-201B
Main Sponsor
Biometrics Section
Presentations
This work introduces a novel method for extracting functional principal components from sparse, univariate functional data, commonly encountered in longitudinal studies with irregular sampling and measurement error. The approach uses a basis expansion for estimation and an approximate Generalized Cross-Validation (GCV) criterion for optimally selecting the number of basis functions and principal components. Crucially, the methodology preserves essential mathematical properties: eigenfunction orthogonality, eigenvalue positivity, and a positive estimate of the error variance. Using conditional estimation, it then recovers complete individual trajectories and principal scores across the domain. Simulation studies demonstrate the method's superior performance in estimating eigenfunctions, eigenvalues, and error variance compared to existing techniques. Its practical utility is illustrated with an application to CD4 cell count data from the Multicenter AIDS Cohort Study.
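The abstract gives no code; the core of the pipeline it describes (basis expansion, an eigendecomposition whose eigenvalues are clipped at zero, and eigenfunctions that stay orthonormal) can be sketched roughly as below. This is a minimal illustration on simulated sparse data: the Fourier basis, the ridge penalty, and all names are assumptions of this sketch, and the paper's GCV selection and conditional score recovery are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_basis(t, K):
    # First K functions of an orthonormal Fourier basis on [0, 1]
    B = np.ones((len(t), K))
    for k in range(1, K):
        freq = (k + 1) // 2
        if k % 2 == 1:
            B[:, k] = np.sqrt(2) * np.sin(2 * np.pi * freq * t)
        else:
            B[:, k] = np.sqrt(2) * np.cos(2 * np.pi * freq * t)
    return B

# Simulate sparse, noisy longitudinal curves:
#   y_ij = sum_k c_ik phi_k(t_ij) + eps_ij
K, n, sigma2 = 5, 200, 0.25
coef_cov = np.diag([4.0, 2.0, 1.0, 0.3, 0.1])
data = []
for _ in range(n):
    m = rng.integers(4, 9)                 # irregular visit count per subject
    t = np.sort(rng.uniform(0, 1, m))
    c = rng.multivariate_normal(np.zeros(K), coef_cov)
    y = fourier_basis(t, K) @ c + rng.normal(0, np.sqrt(sigma2), m)
    data.append((t, y))

# Step 1: per-subject ridge estimates of the basis coefficients
# (the penalty keeps the fit stable when a subject has fewer than K visits)
C = np.zeros((n, K))
for i, (t, y) in enumerate(data):
    B = fourier_basis(t, K)
    C[i] = np.linalg.solve(B.T @ B + 0.01 * np.eye(K), B.T @ y)

# Step 2: eigendecompose the pooled coefficient covariance and clip
# eigenvalues at zero, enforcing eigenvalue positivity
vals, vecs = np.linalg.eigh(np.cov(C, rowvar=False))
order = np.argsort(vals)[::-1]
vals = np.clip(vals[order], 0.0, None)
vecs = vecs[:, order]

# Estimated eigenfunctions: an orthonormal basis times an orthogonal
# rotation stays orthonormal, preserving eigenfunction orthogonality
grid = np.linspace(0, 1, 201)
Phi = fourier_basis(grid, K) @ vecs
```

Clipping the eigenvalues and rotating an orthonormal basis are what deliver, in this toy form, the positivity and orthogonality properties the abstract emphasizes.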
Keywords
Functional Data Analysis
Longitudinal Data
Functional Principal Components
Modified Gram-Schmidt Orthonormalization
Basis Functions
Maximum Likelihood Estimate
Pretrial risk assessment instruments (PRAIs) are algorithmic tools widely used in the US justice system. They predict the risk of future pretrial misconduct, such as failure to appear or new criminal activity, and derive risk scores that inform judges' decisions on conditions for pretrial release. This quantitative study examined the effectiveness of PRAIs in mitigating risk by comparing misdemeanor bail assignments in two southeastern US counties, Mecklenburg and Wake in NC. The data included five years of misdemeanor arrests between 2018 and 2021. Wake County did not use any PRAI during this period; Mecklenburg County used the Public Safety Assessment. Comparing misdemeanor offenses across the two counties revealed a statistically significant difference in bail assignments: money bail was used more often as a means of pretrial release in Mecklenburg County than in Wake County. Race and gender were significant influences for bail greater than $5,000. These findings imply that the use of a PRAI might not lead to a reduction in the use of financial means of release.
Keywords
Pretrial
Misdemeanor
Bail
Public Safety Assessment
Risk Instrument
Criminal Justice
Food is an essential part of daily life, providing vital nutrition to the mind and body. The lack of such an essential resource can significantly hinder one's quality of life. In the South, "The state of Tennessee measures at 11.9% food insecurity" (Durnell, 2023, p. 1). This statistic reveals that many people cannot acquire healthy food, which in turn decreases their quality of life. Clarksville, a growing rural town in Tennessee, struggles with this same issue. In this study, we seek to understand what factors classify food-insecure populations in Clarksville, TN, and how they compare at both smaller and larger scales. Using data from the United States Census Bureau, we analyze trends, draw conclusions from the Clarksville data, and compare these results with data from Murfreesboro, TN, and the US as a whole. The data contain food-stamp counts and percentages for different populations. This research aspires to aid food pantries and food-insecure populations in growing rural areas similar to Clarksville, TN, and it draws attention to the impoverished and those who struggle to obtain healthy food in a progressing society.
Keywords
Food Insecurity
Tennessee
Classification
Growing
Rural
Statistics
This systematic review explores the methodological challenges in longitudinal mental health research, focusing on study design characteristics, data handling approaches, and statistical methods. By synthesizing evidence from studies published between 1990 and 2024, the review aims to provide practical recommendations for addressing issues such as attrition, missing data, and serial correlations, ultimately contributing to more reliable and interpretable analyses in mental health research. Once coding is complete, the data will be synthesized narratively, with a focus on identifying patterns and trends in how longitudinal mental health studies address methodological challenges. It will offer practical recommendations for applied researchers, contributing to improved reliability and interpretability of longitudinal analyses in mental health research.
The findings are expected to offer valuable insights into longitudinal mental health study designs and characteristics, serving as a starting point for future methodological studies addressing the existing complexities in the field.
Keywords
Longitudinal mental health
Methodological complexities
Serial correlation
Co-Author(s)
Yizou Lu, University of Wisconsin-Milwaukee
Evan David Heyden, University of Wisconsin-Milwaukee
Michaela Braun, University of Wisconsin-Milwaukee
Sneha Gandla, University of Wisconsin-Milwaukee
Manthan Mahesh Mehta, University of Wisconsin-Milwaukee
Brianna E Gonzalez, University of Wisconsin-Milwaukee
Jennifer Landeta Vidal, University of Wisconsin-Milwaukee
Liliana Isabel Kasta, University of Wisconsin-Milwaukee
First Author
Laleh Jamshidi, University of Wisconsin-Milwaukee
Presenting Author
Laleh Jamshidi, University of Wisconsin-Milwaukee
In life testing studies, such as industrial reliability testing or clinical trials, it is critical to use a Type I censoring strategy that limits test time while delivering adequate information about the life characteristics. This paper investigates an optimal Type I censoring scheme using Shannon information gain, a measure of the information collected from the experiment. We adopt a Bayesian approach to modeling life tests, focusing on optimizing the expected information gain during the design process. Because of the complexity of the calculations, we use the Metropolis-Hastings algorithm to approximate the expected Shannon information gain at different censoring times and obtain the optimal setting via augmented probability simulation and a monotone smoothing process. Our goal is to choose an optimal censoring time that produces a reasonable degree of information gain, usually about 90% of the maximum achievable value. We apply the methodology to data on hardened steel specimens, demonstrating the usefulness of the proposed algorithm in determining the optimal censoring time for life tests with Type I censoring.
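The full Metropolis-Hastings and augmented-probability-simulation machinery is beyond a short sketch, but the design criterion itself can be illustrated in a conjugate special case: exponential lifetimes with a gamma prior on the failure rate, where the prior-to-posterior Kullback-Leibler divergence (the Shannon information gain) is available in closed form and the expected gain at each censoring time can be approximated by plain Monte Carlo. The prior, sample size, and all names below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.special import digamma, gammaln

def kl_gamma(a1, b1, a0, b0):
    # KL( Gamma(a1, b1) || Gamma(a0, b0) ), rate parameterization
    return ((a1 - a0) * digamma(a1) - gammaln(a1) + gammaln(a0)
            + a0 * (np.log(b1) - np.log(b0)) + a1 * (b0 - b1) / b1)

def expected_gain(c, n=10, a0=2.0, b0=1.0, reps=2000, rng=None):
    # Monte Carlo estimate of the expected Shannon information gain of a
    # Type I censored test of n items, censored at time c
    if rng is None:
        rng = np.random.default_rng(1)
    gains = np.empty(reps)
    for r in range(reps):
        lam = rng.gamma(a0, 1.0 / b0)       # draw a failure rate from the prior
        t = rng.exponential(1.0 / lam, n)   # latent lifetimes of the n items
        d = np.sum(t <= c)                  # failures observed by time c
        T = np.sum(np.minimum(t, c))        # total time on test
        # conjugate update: posterior is Gamma(a0 + d, b0 + T)
        gains[r] = kl_gamma(a0 + d, b0 + T, a0, b0)
    return gains.mean()

# Scan candidate censoring times; pick the smallest one achieving
# about 90% of the maximum achievable expected gain
cs = np.linspace(0.2, 5.0, 13)
g = np.array([expected_gain(c) for c in cs])
c_star = cs[np.argmax(g >= 0.9 * g.max())]
```

Longer tests always gain more information, so the curve saturates; the 90% rule trades a small loss of information for a much shorter test, which mirrors the abstract's stated goal.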
Keywords
Shannon Information Gain
Metropolis-Hastings Algorithm
Type-I Censoring
Augmented probability simulation
We introduce non-Gaussian regression models with serially correlated errors and propose an estimation method for them. We incorporate an additional parameter into the regression models to induce normality and then simultaneously estimate all the parameters. To this end, we explore a posterior estimation method in the wavelet domain. Performance is assessed through simulation studies.
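The abstract does not specify the normality-inducing parameter, and the wavelet-domain posterior estimation is not reproduced here. One familiar device of this general kind is a Box-Cox power transform whose exponent is chosen by profile maximum likelihood; the sketch below is purely an illustrative stand-in under that assumption, showing that a skewed response recovers an exponent near zero (the value under which a log-normal response becomes Gaussian).

```python
import numpy as np

rng = np.random.default_rng(2)

def boxcox(y, lam):
    # Box-Cox power transform; the lam -> 0 limit is log(y)
    if abs(lam) < 1e-8:
        return np.log(y)
    return (y**lam - 1.0) / lam

def neg_profile_loglik(lam, y):
    # Negative profile Gaussian log-likelihood of the transformed data
    # (up to constants), including the transform's Jacobian term
    z = boxcox(y, lam)
    return 0.5 * len(y) * np.log(z.var()) - (lam - 1.0) * np.log(y).sum()

y = rng.lognormal(mean=0.0, sigma=0.5, size=500)   # positively skewed response
lams = np.linspace(-1.0, 1.5, 101)
nll = np.array([neg_profile_loglik(l, y) for l in lams])
lam_hat = lams[np.argmin(nll)]                     # should land near 0
```

A grid search suffices here because the profile likelihood is one-dimensional; the abstract's method instead estimates the transform parameter jointly with the regression and correlation parameters.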
Keywords
Non-Gaussian
Posterior estimation
Regression
Serially Correlated
Wavelet Transform