Monday, Aug 5: 10:30 AM - 12:20 PM
6011
Contributed Posters
Oregon Convention Center
Room: CC-Hall CD
Main Sponsor
ENAR
Presentations
There is a lot of hype surrounding AI, some of it justified. There remains, however, a gap in evidence for AI's efficacy in improving healthcare outcomes. Much of this gap can be attributed to inadequately designed evaluation studies, which often lack rigorous outcome assessment. Effect estimates are frequently based on observational studies that fail to account adequately for selection bias, yielding unreliable, or outright incorrect, results. For AI to achieve the goal of improving patient health, the industry must adopt randomized trials, particularly pragmatic RCTs, to robustly test AI tools before implementation. Speed and rigor in research are not mutually exclusive, and together they can responsibly accelerate AI's integration into clinical practice. Incentives must be reformed to prioritize rigorous AI research over the proliferation of unvalidated models, and physicians must acquire modern AI evaluation skills and lead these studies. In this poster, we present our progress in bridging this gap at a large academic medical center. In addition, we present several demonstration studies showing that large-scale pragmatic RCTs of AI models can be done and do speed up progress toward improved patient health.
Keywords
Artificial Intelligence in Healthcare
Pragmatic Randomized Trial
Machine Learning
Real-time predictive modeling
Abstracts
In the last decade, there have been major advances in decoding from marked point process models that describe the joint activity of many neurons simultaneously, without the need for spike sorting. In this study, we examine entropy-based metrics to analyze the information that is extracted from each observed spike under such clusterless models. In an analysis of spatial coding in rat hippocampus, we compared the entropy reduction between spike sorted and clusterless models both for individual spikes observed in isolation and when the prior information from all previously observed spikes is accounted for. Our analysis demonstrates that low amplitude spikes, which are difficult to cluster and often left out of spike sorting, provide reduced information compared to sortable, high-amplitude spikes when considered in isolation, but the two provide similar levels of information when considering all the prior information available from past spiking. These findings demonstrate the value of our entropy measures and yield new insights into the underlying mechanisms of neural computation.
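The entropy-reduction idea above can be sketched in a few lines: over a discretized position space, the information extracted from one spike is the drop in Shannon entropy from the prior to the posterior obtained by multiplying in that spike's likelihood. This is a minimal illustration of the metric, not the authors' implementation; the function names, the uniform prior, and the Gaussian-shaped mark likelihood are our own assumptions for the toy example.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution over position bins."""
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))

def entropy_reduction(prior, likelihood):
    """Entropy reduction (bits) from observing one spike:
    H(prior) - H(posterior), where posterior ∝ prior * likelihood."""
    posterior = prior * likelihood
    posterior = posterior / posterior.sum()
    return entropy(prior) - entropy(posterior)

# Toy example: uniform prior over 50 position bins, and one spike whose
# mark-dependent likelihood is peaked near bin 20 (a "high-information" spike).
bins = np.arange(50)
prior = np.full(50, 1 / 50)
likelihood = np.exp(-0.5 * ((bins - 20) / 3.0) ** 2)

print(entropy_reduction(prior, likelihood))
```

Comparing spikes "in isolation" versus "with prior information," as in the abstract, amounts to choosing `prior` to be uniform versus the running posterior accumulated from all previous spikes: a low-amplitude spike with a broad likelihood reduces a uniform prior very little, but can reduce an already-concentrated prior by a comparable amount to a sortable spike.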
Keywords
Marked point process models
clusterless decoding
Information measures for spike trains
Abstracts
Our goal is to produce methods for observational causal inference that are auditable, easy to troubleshoot, yield accurate treatment effect estimates, and are adaptable to various datasets. We describe an almost-exact matching approach that achieves these goals by (i) learning a distance metric via outcome modeling, (ii) creating matched groups using the distance metric, and (iii) using the matched groups to estimate treatment effects. Our proposed method uses variable importance measurements to construct a distance metric, making it a flexible method that can be adapted to various applications. We operationalize this method into a safe and interpretable framework to identify optimal treatment regimes in a noisy ICU dataset. In this application, we face challenges including missing data, inherent stochasticity, and the critical requirements for interpretability and patient safety. Using our approach, we match patients with similar medical and pharmacological characteristics, allowing us to construct an optimal policy via interpolation. Our findings strongly support personalized treatment strategies based on a patient's medical history and pharmacological features.
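Steps (i)-(iii) above can be sketched as follows, under stated assumptions: step (i) is summarized by a vector of variable-importance weights assumed to come from an already-fitted outcome model, the matched groups in step (ii) are reduced to one nearest control per treated unit, and step (iii) averages matched outcome differences. All names and the toy data are hypothetical; this is an illustration of importance-weighted matching, not the authors' almost-exact matching code.

```python
import numpy as np

def weighted_distance(x, y, w):
    """Distance between covariate vectors, weighted by variable importance."""
    return np.sqrt(np.sum(w * (x - y) ** 2))

def match_and_estimate(X, treat, y, w):
    """For each treated unit, find the nearest control under the
    importance-weighted metric, then average matched outcome differences
    (an estimate of the average treatment effect on the treated)."""
    treated = np.where(treat == 1)[0]
    controls = np.where(treat == 0)[0]
    diffs = []
    for i in treated:
        d = [weighted_distance(X[i], X[j], w) for j in controls]
        j = controls[np.argmin(d)]
        diffs.append(y[i] - y[j])
    return np.mean(diffs)

# Toy data: only the first covariate drives the outcome, and the importance
# weights (assumed learned from an outcome model) reflect that.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
treat = rng.integers(0, 2, size=100)
w = np.array([1.0, 0.5, 0.0])
y = X[:, 0] + 2.0 * treat + rng.normal(scale=0.1, size=100)
print(match_and_estimate(X, treat, y, w))
```

Because the zero weight on the third covariate excludes it from the metric, units are matched "almost exactly" on the covariates that matter for the outcome, which is what makes the resulting matched groups auditable: one can inspect exactly which characteristics a match shares.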
Keywords
causal inference
optimal treatment regime
machine learning
variable importance
matching
medicine
Abstracts
Longitudinal biomarker data and health outcomes are routinely collected in epidemiologic studies to examine how biomarker trajectories predict health outcomes, which in turn informs health interventions. Many existing methods that connect longitudinal trajectories with health outcomes focus mainly on mean profiles, treating variability as a nuisance parameter; variability, however, may also carry substantial information. We develop a Bayesian joint modeling approach to study the association of both mean trajectories and variabilities in a longitudinal biomarker with survival times. To model the longitudinal biomarker, we adopt a linear mixed effects model that allows individuals to have their own variabilities. We then model the survival times through threshold regression, also known as the first-hitting-time model, which incorporates the random effects and variabilities as predictors and allows for non-proportional hazards. We apply the proposed model to data from the Study of Women's Health Across the Nation and find that higher mean values and variabilities of follicle-stimulating hormone are associated with an earlier age at final menstrual period.
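The first-hitting-time construction behind threshold regression can be sketched numerically: the event time is the first passage of a latent Wiener process to a boundary, which gives an inverse Gaussian density, and covariates (here, a subject's random effect and residual variability from the longitudinal model) enter the initial level and the drift through regression links. This is a generic sketch of that construction under our own parameterization and link choices, not the authors' model; `sigma` is fixed at 1 for identifiability, a common convention.

```python
import numpy as np

def fht_log_density(t, b0, mu, sigma):
    """Log density of the first hitting time of zero for a Wiener process
    started at b0 > 0 with drift -mu and diffusion sigma (inverse Gaussian):
    f(t) = b0 / sqrt(2*pi*sigma^2*t^3) * exp(-(b0 - mu*t)^2 / (2*sigma^2*t))."""
    return (np.log(b0)
            - 0.5 * np.log(2 * np.pi * sigma ** 2 * t ** 3)
            - (b0 - mu * t) ** 2 / (2 * sigma ** 2 * t))

def joint_loglik(params, t, Z):
    """Threshold-regression log-likelihood for event times t given
    predictors Z (e.g. each subject's estimated random effect and
    individual variability from the longitudinal submodel)."""
    k = Z.shape[1]
    a, b = params[:k], params[k:]
    b0 = np.exp(Z @ a)  # initial latent health level, kept positive via log link
    mu = Z @ b          # drift toward the boundary (the event)
    return np.sum(fht_log_density(t, b0, mu, 1.0))
```

Because the hazard implied by this density first rises and then falls, hazards for two subjects with different `(b0, mu)` can cross, which is what gives threshold regression its non-proportional-hazards flexibility.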
Keywords
Longitudinal Biomarker
Joint modeling
Survival outcomes
Threshold regression
Individual-level variability
Study of Women's Health Across the Nation