Macro for Nonnormal Data, Meta-Analysis, Combining SAS and R for Clinical Trials, Mixed Effect Models, and Clinical Trial Experience of Screening Patients with Dementia

Quan Zhou, Chair
BeiGene
 
Monday, Aug 5: 2:00 PM - 3:50 PM
5067 
Contributed Papers 
Oregon Convention Center 
Room: CC-G131 
Multiple imputation (MI) is a popular statistical procedure for handling missing data, while robust regression guards estimation when variables depart from normality. By down-weighting the influence of outliers, robust regression minimizes their impact on the coefficient estimates. The first talk presents a macro that combines the two methods to accurately capture relationships in continuous efficacy laboratory data while protecting against potential non-normality and outliers in the original or imputed dataset. Another speaker focuses on patient-reported outcomes, such as quality of life (QOL), which are commonly collected in oncology studies and are increasingly common in meta-analyses. That work is based on a meta-analysis data set in which studies of QOL among cancer patients receiving radiation therapy have been reported longitudinally as correlated continuous repeated measures; results from a simulation study demonstrate that bias and coverage problems tend to arise as the proportion of studies reporting medians increases and as the underlying distributions become more skewed. The next speaker presents a functional programming style approach to Monte Carlo sample size determination analysis in R and SAS, sharing an example from ophthalmology with SAS code for an analysis done at Alcon. The final speakers discuss mixed effect models, longitudinal meta-analysis, and a multimodal deep learning tool for screening dementia patients in clinical trials.

Main Sponsor

Section for Statistical Programmers and Analysts

Presentations

An effective macro to analyze non-normal data with missing values

Multiple imputation (MI) is a popular statistical procedure for handling missing data, while robust regression guards estimation when variables depart from normality. By down-weighting the influence of outliers, robust regression minimizes their impact on the coefficient estimates. A macro developed at Fortrea combines the two methods to accurately capture relationships in continuous efficacy laboratory data while protecting against potential non-normality and outliers in the original or imputed dataset. This paper provides an example programming procedure and suggests possible improvements to the macro based on the author's experience.
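
The macro itself is SAS-based; purely as an illustration of the underlying idea, the following minimal R sketch (assuming the {mice} and {MASS} packages, with made-up data) combines multiple imputation with robust regression:

library(mice)  # multiple imputation with pooling by Rubin's rules
library(MASS)  # rlm(): robust regression via M-estimation

set.seed(1)
# Made-up data: a continuous lab endpoint with heavy tails and missing values
dat <- data.frame(baseline = rnorm(100, 50, 10),
                  trt      = rep(0:1, each = 50))
dat$change <- 0.5 * dat$baseline - 3 * dat$trt + 5 * rt(100, df = 3)
dat$change[sample(100, 15)] <- NA

imp  <- mice(dat, m = 5, printFlag = FALSE)      # 5 imputed datasets
fits <- with(imp, rlm(change ~ baseline + trt))  # robust fit per imputation
summary(pool(fits))  # pooled estimates (pool() may assume large-sample df for rlm)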

Keywords

Multiple imputation

missing data

non-normal

Robust regression 

Abstract 1847

First Author

Fengzheng Zhu

Presenting Author

Fengzheng Zhu

Risk assessment of R packages in a biopharmaceutical regulatory setting

This contribution reflects on the framework and tools developed by the R Validation Hub for the risk-based assessment of R packages within a validated infrastructure.
The R Validation Hub is a cross-industry initiative led by approximately 10 organizations, with frequent involvement from health authorities. It is funded by the R Consortium and has the mission of supporting the adoption of R within regulated industries, with an emphasis on biopharmaceuticals.
We will discuss the framework for the risk-based assessment of R packages, which has been utilized by key pharma companies across the industry. We will also showcase the {riskmetric} R package, which evaluates the risk of an R package using a specified set of metrics and validation criteria, and the {riskassessment} app, which augments the utility of the {riskmetric} package with a Shiny app front end. Lastly, we will outline a prototype of a technical framework for a 'repository' of R packages with accompanying evidence of their quality and the assessment criteria.
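
For orientation, the basic {riskmetric} workflow, as documented in the package's README, looks roughly like the following (the metric set and scoring details continue to evolve, so treat this as a sketch):

library(dplyr)
library(riskmetric)

# Reference packages, assess each against the available metrics,
# then collapse the assessments into per-package risk scores
pkg_ref(c("riskmetric", "utils")) %>%
  pkg_assess() %>%
  pkg_score()

The {riskassessment} app wraps this same pipeline in an interactive Shiny front end.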

Keywords

R Package

Risk assessment

Open-source software

Regulated environment

validation 

Abstract 2338

Co-Author(s)

Antal Martinecz, Certara
Doug Kelkhoff, Genentech

First Author

Juliane Manitz

Presenting Author

Juliane Manitz

Combining R and SAS with tidy functional programming for clinical trial design

We propose a functional programming style approach to Monte Carlo sample size determination analysis in R and SAS. Our proposed workflow centers around the development of a study-specific R package used to conduct the analysis, exporting functions for simulating data, modeling data, and summarizing results. Doing so has numerous advantages: R packages have a predictable structure, come with powerful documentation and unit testing tools, are portable, and are easy to collaborate on. In lieu of more standard functional tools such as the lapply() family or the {purrr} library, we recommend the use of the exported functions with parallelizable rowwise operations on nested tibbles from the {tidyr} package, extending the notion of "tidy" data to the "tidy" organization of simulation data. We also discuss a functional style approach to modeling data in SAS via macros for designs involving the use of SAS-specific tools such as PROC MIXED, demonstrating a methodology for using SAS and R in tandem. We conclude with an example from ophthalmology, showcasing the development and use of an R package and SAS code for such an analysis at Alcon.
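
To give a flavor of that workflow, here is a minimal sketch in which sim_data() and fit_model() stand in for hypothetical functions a study-specific package would export; power is estimated by rowwise operations over a nested tibble:

library(dplyr)
library(tidyr)

# Hypothetical stand-ins for a study-specific package's exported functions
sim_data  <- function(n, effect) tibble(y = rnorm(n, mean = effect))
fit_model <- function(dat) t.test(dat$y)$p.value

set.seed(42)
# One row per scenario x replicate, with simulated data nested beside its inputs
results <- expand_grid(n = c(50, 100), effect = c(0.2, 0.5), rep = 1:500) %>%
  rowwise() %>%
  mutate(data = list(sim_data(n, effect)),
         p    = fit_model(data)) %>%
  ungroup()

# Empirical power for each design scenario
results %>%
  group_by(n, effect) %>%
  summarise(power = mean(p < 0.05), .groups = "drop")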

Keywords

Functional Programming

Clinical Trial Design

Monte Carlo Simulation

R

SAS 

Abstract 3460

Co-Author

Mary Rosenbloom, ALCON Laboratories Inc

First Author

James Otto, Alcon

Presenting Author

James Otto, Alcon

Multimodal deep learning algorithm as a tool for dementia clinical trial patient disease screening

Dementia is a complex disease with various etiologies. New multimodal deep learning algorithms have been developed to improve the diagnosis of dementia by classifying patients into the categories of normal cognition (NC), mild cognitive impairment (MCI), Alzheimer's disease (AD), and non-AD dementias (nADD).
One of the core difficulties in implementing dementia clinical trials, especially AD trials, lies in the diagnostic ambiguity of Alzheimer's, where symptomatic overlap with other cognitive disorders often leads to misdiagnosis. Dementia clinical trials therefore tend to have high screen failure rates, and manual verification of inclusion criteria places a heavy burden on the sponsor.
In our work, we explore the use of this multimodal deep learning algorithm as a tool for clinical trial patient disease screening verification, reducing the cost of the clinical study while improving its quality. We will present an accuracy assessment of the deep learning algorithm against neurologist assessment, based on sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), in a real-world clinical trial setting. We will also explore the optimal set of input variables for the algorithm, balancing accuracy against the cost and time of the medical exams.
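
For reference, the four accuracy measures named above are simple functions of a 2x2 confusion matrix; the R sketch below uses illustrative counts, not study data:

# Illustrative counts only, with the neurologist assessment as the reference
tp <- 42; fn <- 8    # true cases:     algorithm-positive / algorithm-negative
fp <- 5;  tn <- 45   # true non-cases: algorithm-positive / algorithm-negative

c(sensitivity = tp / (tp + fn),  # P(algorithm+ | disease+)
  specificity = tn / (tn + fp),  # P(algorithm- | disease-)
  PPV         = tp / (tp + fp),  # P(disease+ | algorithm+)
  NPV         = tn / (tn + fn))  # P(disease- | algorithm-)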

Keywords

multimodal deep learning algorithm

Dementia clinical trial

disease screening

sensitivity, specificity, PPV and NPV

real world 

Abstract 2661

Co-Author(s)

Ying Liu, Princeton Pharmatech LLC
Polina Vyniavska, Princeton Pharmatech LLC

First Author

William Jin, West Windsor - Plainboro High School North

Presenting Author

William Jin, West Windsor - Plainboro High School North

Longitudinal Meta-Analysis Estimates When Mean from Median Estimation Is Necessary

Patient-reported outcomes, such as quality of life (QOL), are commonly collected in oncology studies and are increasingly common in meta-analyses. We currently have a meta-analysis data set in which studies of QOL among cancer patients receiving radiation therapy have been reported longitudinally as correlated continuous repeated measures. While most studies of QOL have reported means and standard deviations, some studies have reported medians with ranges, interquartile ranges (IQR), or both. It is unknown how existing methods for mean from median estimation may affect the results of a meta-analysis when data come from longitudinal studies reporting correlated repeated measures. In a simulation study, we varied the underlying distributions, the numbers of studies and of subjects within studies, the data reported (medians with range, IQR, or both), and the proportion of studies reporting medians. Results show that bias and coverage problems tend to arise as the proportion of studies reporting medians increases and as the underlying distributions become more skewed.
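
As one example of the kind of conversion the simulation evaluates, a widely used large-sample method (e.g., Wan et al., 2014) estimates a mean and SD from a reported median and IQR, assuming approximate normality within the study; a minimal R sketch:

# Large-sample conversion from (q1, median, q3) to an estimated mean and SD
mean_sd_from_iqr <- function(q1, med, q3) {
  est_mean <- (q1 + med + q3) / 3
  est_sd   <- (q3 - q1) / (2 * qnorm(0.75))  # normal IQR is about 1.35 * sd
  c(mean = est_mean, sd = est_sd)
}

mean_sd_from_iqr(q1 = 55, med = 62, q3 = 71)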

Keywords

Meta-Analysis

Simulation

Quality of Life

Cancer

Longitudinal 

Abstract 3040

Co-Author(s)

Lynette Smith, University of Nebraska Medical Center
Christopher Wichman, University of Nebraska Medical Center

First Author

Harlan Sayles, UNMC

Presenting Author

Harlan Sayles, UNMC