Tuesday, Aug 5: 10:30 AM - 12:20 PM
4098
Contributed Papers
Music City Center
Room: CC-101C
Main Sponsor
Lifetime Data Science Section
Presentations
Testing the treatment effect on recurrent events when a terminal event exists has been challenging in clinical studies. Traditional methods based on cumulative frequency unfairly disadvantage longer survivors, who tend to experience more recurrent events. Methods such as the while-alive loss rate ratio test (WA) attempt to resolve this issue, and WA performs well with respect to type I error and power when the recurrent event rate is constant over time. However, if the constant-rate assumption is violated, WA can exhibit inflated type I error and inaccurate effect size estimation. To overcome this pitfall, we propose a Proportional Marginal Rate Structural Model assisted test (PMRSMT), developed in a framework of separable treatment effects on the recurrent and terminal events, respectively. In a simulation study, we show that PMRSMT controls the type I error and attains power comparable to WA, even when the recurrent event rate varies over time. We further illustrate the application of PMRSMT by comparing postoperative adverse events under interventions with different mechanical circulatory support devices in the Interagency Registry for Mechanically Assisted Circulatory Support program.
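For reference, a commonly used while-alive loss rate (the estimand behind the WA test) is l(tau) = E[N(tau ^ D)] / E[tau ^ D]: the expected number of events accrued while alive up to tau, divided by the expected survival time up to tau. Below is a minimal simulation sketch of the two-arm WA ratio under a constant event rate; all function and parameter names are illustrative and not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def simulate_arm(n, event_rate, death_rate, tau):
    # Exponential death times; Poisson recurrent events accrued while alive.
    death = rng.exponential(1.0 / death_rate, n)
    alive_time = np.minimum(death, tau)            # exposure up to min(death, tau)
    n_events = rng.poisson(event_rate * alive_time)
    return n_events, alive_time

def while_alive_loss_rate(n_events, alive_time):
    # Sample-mean estimate of l(tau) = E[N(tau ^ D)] / E[tau ^ D].
    return n_events.mean() / alive_time.mean()

# Two arms with the same constant event rate: the true WA ratio is 1.
e1, t1 = simulate_arm(2000, event_rate=1.0, death_rate=0.3, tau=3.0)
e0, t0 = simulate_arm(2000, event_rate=1.0, death_rate=0.3, tau=3.0)
print(while_alive_loss_rate(e1, t1) / while_alive_loss_rate(e0, t0))

With a time-varying event rate (for example, a Weibull-type intensity in place of the constant event_rate above), this single ratio no longer summarizes the treatment effect cleanly, which is the failure mode PMRSMT is designed to avoid.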
Keywords
recurrent event
competing risk
hypothesis testing
structural model
Multi-state models are widely used to study complex interrelated life events. In resource-limited settings, nested case-control (NCC) sampling may be employed to extract subsamples from a cohort for events of interest, followed by a conditional likelihood analysis. However, conditioning restricts the reuse of NCC data for studying additional events. An alternative approach constructs pseudolikelihoods using inverse probability weighting (IPW) with NCC data. Existing IPW-based methods focus on estimating relative risks for multiple or secondary outcomes. We extend these methods to predict transition probabilities under general multi-state models and evaluate their efficiency. We propose two novel approaches for more efficient prediction and derive explicit variance estimators. The first approach calibrates the design weights using cohort-level information, while the second jointly models transitions from the same state. A simulation study demonstrates that each approach substantially improves efficiency and that their combined application yields further gains. We illustrate these methods with real data from the Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial.
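For context, the IPW reuse of NCC data that this work extends typically starts from Samuelsen-type inclusion probabilities: a control at risk at case times t_j with risk-set sizes n(t_j) is ever sampled with probability 1 - prod_j {1 - m / (n(t_j) - 1)} when m controls are drawn per case. A minimal sketch of these design weights, assuming that standard formula (variable names are ours):

import numpy as np

def samuelsen_weights(time, is_case, m):
    # IPW design weights for NCC data: cases get weight 1; a control's weight
    # is the inverse of its probability of ever being sampled.
    time = np.asarray(time, dtype=float)
    is_case = np.asarray(is_case, dtype=bool)
    case_times = np.sort(time[is_case])
    w = np.ones(time.shape)
    for i in np.where(~is_case)[0]:
        t_elig = case_times[case_times <= time[i]]   # case times at which i is at risk
        if t_elig.size == 0:
            w[i] = np.nan                            # never sampleable; not in NCC data
            continue
        n_risk = np.array([(time >= t).sum() for t in t_elig])
        p_draw = np.clip(m / (n_risk - 1), 0.0, 1.0) # P(sampled at each case time)
        w[i] = 1.0 / (1.0 - np.prod(1.0 - p_draw))
    return w

The two proposals in the abstract then refine weights of this kind: calibrating them against cohort-level information, and borrowing strength by jointly modeling transitions out of the same state.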
Keywords
Cox regression
Multi-state models
Nested case-control design
Pseudolikelihood
Transition probability
Weight calibration
Co-Author(s)
Anastasia Ivanova, University of North Carolina-Chapel Hill
Demetrius Albanes, Division of Cancer Epidemiology and Genetics, National Cancer Institute
Jason Fine, Department of Statistics, University of Pittsburgh
Yei Eun Shin
First Author
Yen Chang, The University of North Carolina at Chapel Hill
Presenting Author
Yen Chang, The University of North Carolina at Chapel Hill
Nonignorable missing covariates frequently arise in survival analysis, leading to biased results when the missingness mechanism is incorrectly assumed to be missing at random (MAR). Existing methods for addressing nonignorable missing covariates often rely on strong model identification assumptions, such as the use of instrumental variables, which are challenging to verify in practice. In this paper, we consider the setting where a one-dimensional covariate, referred to as the exposure, is subject to nonignorable missingness, whereas the other covariates are fully observed. We propose a novel estimation procedure for the parameters in the propensity score model for the exposure variable by assuming a Gaussian mixture form for the conditional density of the exposure given the observed covariates among subjects whose exposure is observed. Fractional weights that depend on this conditional density as well as the parameters in the propensity score model can then be constructed, and we conduct statistical inference for the Cox regression parameters using these fractional weights. Our approach offers a flexible framework for handling nonignorable missingness mechanisms and does not require the self-consistency assumption imposed by traditional multiple imputation methods. Monte Carlo simulations and real-world data applications demonstrate the efficacy of the proposed method, highlighting its potential to provide robust and reliable inference in survival analysis settings with a nonignorably missing exposure.
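The construction rests on a Bayes identity that ties the two modeled pieces together (stated here as a sketch, in our notation, with R the response indicator for the exposure X given covariates Z):

\[
f(x \mid z, R = 0) \;=\; f(x \mid z, R = 1)\,
\frac{P(R = 0 \mid x, z)}{P(R = 1 \mid x, z)}\,
\frac{P(R = 1 \mid z)}{P(R = 0 \mid z)} .
\]

The first factor on the right is the Gaussian-mixture density fitted among subjects with observed exposure, and the missingness odds come from the propensity score model; fractional weights targeting f(x | z, R = 0) can therefore be assembled from these two fitted components without an instrumental variable.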
Keywords
Fractional imputation
Nonignorable missingness
Survival analysis
Co-Author
Sixia Chen
First Author
Kai Ding, University of Oklahoma Health Sciences Center
Presenting Author
Kai Ding, University of Oklahoma Health Sciences Center
In many large cohort epidemiological studies, the case-cohort design has been employed to combat the issue of rare events and to save time, cost, and valuable biological samples. In this work, we develop case-cohort methodology for multiple endpoints, and specifically for the recurrent events setting, such as the survival of multiple hip replacements in the same patient. To better handle analysis in recurrent event settings, we examine two general sampling designs: "pooled" sampling, which randomly samples records from all available records, and "event-specific" sampling, which oversamples records from different event occurrences. We derive results comparing the efficiency of these sampling designs, as well as the utility of event-specific sampling when events are sparse or subgroups are under-represented; such issues affect statistical power, especially at later event occurrences. Additionally, we extend sample size and power calculations from the univariate case-cohort setting and derive the optimal allocation ratio of cases to controls that maximizes power under resource constraints.
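The two designs can be made concrete on a record-level layout with one row per (subject, event occurrence). A toy sketch, using an illustrative schema of our own (the event_number column indexing the occurrence is hypothetical, not from the paper):

import pandas as pd

def pooled_sample(records, n_total, seed=0):
    # Pooled design: simple random sample from all available records.
    return records.sample(n=n_total, random_state=seed)

def event_specific_sample(records, n_per_event, seed=0):
    # Event-specific design: sample within each event-occurrence stratum,
    # which can deliberately oversample sparse later occurrences.
    return (records.groupby("event_number", group_keys=False)
                   .apply(lambda g: g.sample(n=min(n_per_event, len(g)),
                                             random_state=seed)))

records = pd.DataFrame({
    "id":           [1, 1, 2, 3, 3, 3, 4],
    "event_number": [1, 2, 1, 1, 2, 3, 1],
})
print(event_specific_sample(records, n_per_event=2))

Under pooled sampling, third and later occurrences (here contributed only by subject 3) may be missed entirely, which is precisely the power concern at later event occurrences that event-specific sampling is meant to address.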
Keywords
survival analysis
efficient sampling
case-cohort design
recurrent events
correlated data
marginal methods
Co-Author
Daniel Gillen, University of California-Irvine
First Author
Yiren Xu, University of California-Irvine
Presenting Author
Yiren Xu, University of California-Irvine
The study of times to nonterminal events of different types and their interrelation is a compelling area of interest. The primary challenge in analyzing such multivariate event times is the presence of informative censoring by the terminal event. While numerous statistical methods have been proposed for a single nonterminal event, i.e., semi-competing risks data, there remains a dearth of tools for analyzing times to multiple nonterminal events due to their more complex dependence structures. We introduce a novel modeling framework that leverages the vine copula to capture the heterogeneous dependence between each pair of event times in terms of both structure and strength. Furthermore, our model allows for regression modeling of all the marginal distributions of times to the nonterminal and terminal events. We propose a likelihood-based estimation and inference procedure that can be implemented efficiently in successive stages. Through simulation, we demonstrate the superiority of our approach over existing methods. We also apply our approach to data from a crowdfunding platform to investigate the relationship between creator-backer communication activities and a creator's lifetime.
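To make the pair-copula idea concrete: with two nonterminal event times T1, T2 and a terminal event time D, one standard three-dimensional (D-vine) decomposition factors the joint density into the three margins, two unconditional pair copulas, and one conditional pair copula,

\[
f(t_1, t_2, d) = f_1(t_1)\, f_2(t_2)\, f_D(d)\;
c_{12}\{F_1(t_1), F_2(t_2)\}\;
c_{2D}\{F_2(t_2), F_D(d)\}\;
c_{1D \mid 2}\{F(t_1 \mid t_2), F(d \mid t_2)\},
\]

so each pair copula can take its own family and strength (the heterogeneous dependence described above), while covariates enter through the marginal regression models. This is a generic vine factorization given as illustration, not necessarily the exact structure used in the paper.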
Keywords
Graphical models
informative censoring
multiple event times
stage-wise estimation
vine copula
The nested case-control (NCC) design is useful for reducing data collection costs in settings where there is a rare event of interest and a covariate that is expensive or burdensome to collect. Instead of using the full cohort for analyses, the NCC design requires full covariate information only from the individuals who experience events (cases) and a subset of the individuals still at risk at each event time (controls). This efficient sampling design has been thoroughly developed in the univariate survival setting; however, there are few examples of its use for multiple event data. We propose a sampling framework and appropriate estimation methods for the multiple event setting that implement the classic NCC design on data stratified by event number. We recommend this event-specific sampling approach to ensure a balance of controls across event numbers and to allow for fitting models with either a common or an event-number-specific baseline hazard.
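A toy sketch of the event-specific sampling step, assuming a record-level layout with one row per (subject, event number); the schema and column names are illustrative, not from the paper:

import numpy as np
import pandas as pd

def event_specific_ncc(records, m, seed=0):
    # Run classic NCC sampling separately within each event-number stratum:
    # for every case, draw m controls from the subjects still at risk for the
    # same event number at the case's event time.
    rng = np.random.default_rng(seed)
    rows = []
    for _, stratum in records.groupby("event_number"):
        for _, case in stratum[stratum["is_case"]].iterrows():
            at_risk = stratum[(stratum["time"] >= case["time"]) &
                              (stratum["id"] != case["id"])]
            controls = at_risk.sample(n=min(m, len(at_risk)),
                                      random_state=int(rng.integers(1_000_000)))
            rows.append(pd.concat([case.to_frame().T, controls]))
    return pd.concat(rows, ignore_index=True)

Stratifying the sampling this way keeps the number of controls balanced across event numbers and leaves open, at the modeling stage, the choice between a common baseline hazard and an event-number-specific one.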
Keywords
nested case-control design
survival analysis
recurrent events