Challenges in Parametric Models

Chair

Eric Rancourt, Statistics Canada
 
Monday, Aug 5: 8:30 AM - 10:20 AM
5036 
Contributed Papers 
Oregon Convention Center 
Room: CC-C125 
This session will explore aspects of parametric modelling from various viewpoints, including robust estimation, machine learning, and nonparametric perspectives.

Main Sponsor

International Statistical Institute

Presentations

WITHDRAWN: All Models Are Wrong, but a Set of Them Is Useful

Prediction performance in practical settings often relies on a single, complex model, which may sacrifice interpretability. The concept of the Rashomon set challenges this paradigm by advocating for a set of equally performing models rather than a singular one. We introduce the Sparse Wrapper Algorithm (SWAG), a novel multi-model selection method that employs a greedy algorithm combining screening and wrapper approaches. SWAG produces a set of low-dimensional models with high predictive power, offering practitioners the flexibility to choose models aligned with their needs or domain expertise without compromising accuracy. SWAG works in a forward stepwise manner: the user selects a learning mechanism, and SWAG begins by evaluating low-dimensional models. It then systematically builds larger models based on the best-performing ones from previous steps. The result is a set of models called "SWAG models." SWAG's modeling flexibility empowers decision-makers in diverse fields such as genomics, engineering, and neurology. Its adaptability allows it to construct a network revealing the intensity and direction of attribute interactions, providing a more insightful perspective. 
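
A minimal Python sketch of the forward procedure described above, for intuition only: the function name `swag`, the scikit-learn learner, the retention fraction `alpha`, and the near-best tolerance are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def swag(X, y, p_max=3, alpha=0.2, learner=LogisticRegression, m_max=200, seed=0):
    """Greedy screening-and-wrapper search returning near-best low-dimensional models."""
    rng = np.random.default_rng(seed)
    n_attrs = X.shape[1]
    models = [(j,) for j in range(n_attrs)]  # screening step: all one-attribute models
    retained = []
    for p in range(1, p_max + 1):
        scored = [(m, cross_val_score(learner(), X[:, list(m)], y, cv=5).mean())
                  for m in models]
        cutoff = np.quantile([s for _, s in scored], 1 - alpha)
        kept = [(m, s) for m, s in scored if s >= cutoff]  # best-performing fraction
        retained.extend(kept)
        if p == p_max:
            break
        # Wrapper step: grow each retained model by one attribute.
        grown = list({tuple(sorted(set(m) | {j}))
                      for m, _ in kept for j in range(n_attrs) if j not in m})
        if len(grown) > m_max:  # cap the number of models explored per dimension
            idx = rng.choice(len(grown), m_max, replace=False)
            grown = [grown[i] for i in idx]
        models = grown
    best = max(s for _, s in retained)
    return [(m, s) for m, s in retained if s >= best - 0.01]  # the "SWAG models"
```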

Keywords

Prediction accuracy

Multi-model selection

SWAG

Interpretability

Feature importance 

Co-Author(s)

Roberto Molinari, Auburn University
Gaetan Bakalli, Emlyon Business School
Stéphane Guerrier, University of Geneva
Cesare Miglioli, University of Geneva
Samuel Orso
Nabil Mili, University of Geneva

First Author

Yagmur Yavuzozdemir

Presenting Author

Yagmur Yavuzozdemir

A compromise criterion for weighted least squares estimates

When independent errors in a linear model have non-identity covariance, the ordinary least squares estimate of the model coefficients is less efficient than the weighted least squares estimate. However, the practical application of weighted least squares is challenging due to its reliance on the unknown error covariance matrix. Although feasible weighted least squares estimates, which use an approximation of this matrix, often outperform the ordinary least squares estimate in terms of efficiency, this is not always the case. In some situations, feasible weighted least squares can be less efficient than ordinary least squares. The comparison between these two estimates has significant implications for the application of regression analysis in varied fields, yet such a comparison remains an unresolved challenge despite its seemingly straightforward nature. In this study, we directly address this challenge by identifying the conditions under which feasible weighted least squares estimates using fixed weights demonstrate greater efficiency than the ordinary least squares estimate. These conditions provide guidance for the design of feasible estimates using random weights. They also shed light on how certain robust regression estimates behave with respect to the linear model with normal errors of unequal variance. 
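
A small simulation can make the efficiency comparison concrete. This sketch (my construction, not from the paper) contrasts OLS with a feasible WLS estimate whose weights come from a crude residual-based variance proxy; depending on how well the weights approximate the true variances, feasible WLS may or may not beat OLS.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 200, 2000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
sigma = np.exp(0.8 * x)              # error s.d. varies with x (heteroscedasticity)
beta = np.array([1.0, 2.0])

def wls(X, y, w):
    """Solve (X' W X) b = X' W y with W = diag(w)."""
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

ols_slopes, fwls_slopes = [], []
for _ in range(reps):
    y = X @ beta + sigma * rng.normal(size=n)
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ b_ols
    w_hat = 1.0 / (r**2 + np.median(r**2))   # crude feasible weights from residuals
    ols_slopes.append(b_ols[1])
    fwls_slopes.append(wls(X, y, w_hat)[1])

print("OLS slope variance :", np.var(ols_slopes))
print("FWLS slope variance:", np.var(fwls_slopes))
```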

Keywords

heteroscedasticity

M-estimation

linear regression

quasi-convexity 

Co-Author

Didong Li

First Author

Jordan Bryan, University of North Carolina at Chapel Hill

Presenting Author

Jordan Bryan, University of North Carolina at Chapel Hill

Fast Cost-constrained Regression

Conventional statistical models assume the availability of covariates without associated costs, yet real-world scenarios often involve acquisition costs and budget constraints imposed on these variables. Scientists must navigate a trade-off between model accuracy and expenditure within these constraints. In this paper, we introduce fast cost-constrained regression (FCR), designed to tackle such problems with computational and statistical efficiency. Specifically, we develop fast and efficient algorithms to solve cost-constrained problems with loss functions satisfying a quadratic majorization condition. We theoretically establish nonasymptotic error bounds for the algorithm's solution, considering both estimation and selection accuracy. We apply FCR to extensive numerical simulations and four datasets from the National Health and Nutrition Examination Survey. Our method outperforms the latest approaches in various performance measures, while requiring fewer iterations and a shorter runtime. 
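
To illustrate the generic recipe the abstract alludes to, here is a hedged sketch of a majorize-then-project iteration for least squares under a cost budget. The greedy knapsack projection, the step size M, and the function name `fcr_sketch` are my simplifications, not the paper's FCR algorithm.

```python
import numpy as np

def fcr_sketch(X, y, costs, budget, M=None, iters=100):
    """Least-squares loss; each step majorizes the loss by a quadratic,
    takes a gradient step, then projects onto the cost constraint."""
    n, p = X.shape
    if M is None:
        M = np.linalg.eigvalsh(X.T @ X / n).max()  # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ beta - y) / n
        u = beta - grad / M                        # minimizer of the quadratic majorant
        # Greedy knapsack projection: spend the budget on coordinates giving the
        # largest decrease in the majorant per unit cost.
        order = np.argsort(-(u**2) / costs)
        beta_new = np.zeros(p)
        spent = 0.0
        for j in order:
            if spent + costs[j] <= budget:
                beta_new[j] = u[j]
                spent += costs[j]
        if np.allclose(beta_new, beta):
            break
        beta = beta_new
    return beta
```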

Keywords

budget constraints

cost

high dimensional regression

non-convex optimization 

Abstract 3181

Co-Author

Xiao Wang, Purdue University

First Author

HyeongJin Hyun

Presenting Author

HyeongJin Hyun

Complex agent-based vs. simpler conditional probability models: tradeoffs in accuracy and cost

Agent-based and microsimulation models can quickly become structurally and computationally complex, and they require substantial effort to build, parameterize, calibrate, and validate. Simpler "back of the envelope" models can provide ballpark estimates in much less time but with lower accuracy. We discuss how simpler models can compensate for complex network structure and non-linearities, with practical applications to policy and epidemiology. Using examples from policy evaluation studies, we show how simple models can provide upper and lower bounds on the estimates, and we discuss the utility of population-averaged (conditional probability), microsimulation, and agent-based models and the tradeoffs among accuracy, cost, and complexity. 
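
As a toy illustration of the tradeoff (mine, not from the talk), the sketch below runs a population-averaged SIR recursion next to a crude agent-based simulation with random contacts; the cheap deterministic model approximates the expensive stochastic one at a fraction of the cost.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, beta, gamma = 1000, 60, 0.3, 0.1

# Population-averaged (conditional probability) model: deterministic recursion.
S, I = N - 1.0, 1.0
avg_infected = []
for _ in range(T):
    new_inf = beta * S * I / N
    rec = gamma * I
    S, I = S - new_inf, I + new_inf - rec
    avg_infected.append(I)

# Agent-based model: each infected agent meets k random agents per step,
# infecting each with probability beta / k (matching the averaged rate).
k = 10
state = np.zeros(N, dtype=int)  # 0 = susceptible, 1 = infected, 2 = recovered
state[0] = 1
abm_infected = []
for _ in range(T):
    infected = np.where(state == 1)[0]
    for _ in infected:
        contacts = rng.integers(0, N, size=k)
        sus = contacts[state[contacts] == 0]
        state[sus[rng.random(sus.size) < beta / k]] = 1
    state[infected[rng.random(infected.size) < gamma]] = 2
    abm_infected.append((state == 1).sum())

print("peak (averaged):", max(avg_infected), " peak (agent-based):", max(abm_infected))
```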

Keywords

Agent-based models

Microsimulation models

Model simplification

Population averaged model

Forecasting

Epidemic model 

Co-Author(s)

Joella Adams, RTI International
Michael Duprey, RTI International

First Author

Georgiy Bobashev, Research Triangle Institute

Presenting Author

Georgiy Bobashev, Research Triangle Institute

Covariate adjustment in randomized trials: comparison of machine learning and parametric models

For estimating the average treatment effect in randomized trials, covariate adjustment improves the efficiency of an estimator with minimal impact on bias and type 1 error. However, there have been insufficient comparisons between parametric models and machine learning-based causal inference methods in randomized settings, specifically considering the trade-offs between a specified model's correctness and its parametric constraints. This study compares the efficiency of the following methods: 1) linear regression models, 2) meta-learners (machine learning-based S-, T-, X-, and DR-learners), and 3) augmented inverse probability weighted estimators (with semiparametric or nonparametric machine learning-based specification). In a simulation study, meta-learners improve efficiency to the same extent as, or more than, the parametric model, regardless of whether the parametric model is correctly specified. However, some methods have issues, such as bias toward the null for the S-learner. Considering both efficiency and bias, we conclude that the DR-learner is a viable option in modest-sized trials. 
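
For orientation, this is a minimal cross-fitted AIPW/DR-style ATE estimator for a randomized trial with known assignment probability e = 0.5; the random-forest outcome model and two-fold cross-fitting are illustrative assumptions, not the study's exact specification.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def dr_ate(X, A, Y, e=0.5, n_splits=2, seed=0):
    """X: covariates (n, p); A: 0/1 treatment; Y: outcome. Returns (ATE, s.e.)."""
    n = len(Y)
    psi = np.zeros(n)
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # Fit outcome models on the training fold, one per arm.
        m1 = RandomForestRegressor(random_state=seed).fit(
            X[train][A[train] == 1], Y[train][A[train] == 1])
        m0 = RandomForestRegressor(random_state=seed).fit(
            X[train][A[train] == 0], Y[train][A[train] == 0])
        m1h, m0h = m1.predict(X[test]), m0.predict(X[test])
        # AIPW influence-function contributions on the held-out fold.
        psi[test] = (m1h - m0h
                     + A[test] * (Y[test] - m1h) / e
                     - (1 - A[test]) * (Y[test] - m0h) / (1 - e))
    return psi.mean(), psi.std(ddof=1) / np.sqrt(n)
```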

Keywords

randomized controlled trials

covariate adjustment

machine learning

asymptotic efficiency

model misspecification

semiparametric efficient estimators 

Abstract 2299

Co-Author(s)

Kentaro Sakamaki, Juntendo University
Tomohiro Shinozaki, Tokyo University of Science

First Author

Ryo Hanaoka

Presenting Author

Ryo Hanaoka

Stochastic gradient descent methods and uncertainty quantification in extended CLSNA models

Coevolving Latent Space Networks with Attractors (CLSNA), introduced by Zhu et al. (2023; JRSS-A), model dynamic networks where nodes in a latent space represent social actors and edges indicate their interactions. Attractors are added at the latent level to capture the notion of attractive and repulsive forces between nodes, borrowing ideas from dynamical systems theory. The reliance of previous work on MCMC, together with the requirement that nodes be present throughout the study period, makes scaling difficult. We address these issues by (i) introducing an SGD-based parameter estimation method, (ii) developing a novel approach for uncertainty quantification using SGD, and (iii) extending the model to allow nodes to join and leave. Simulation results suggest that our approach loses little accuracy compared to MCMC but scales to much larger networks. We revisit Zhu et al.'s analysis of longitudinal social networks of the US Congress on the social media platform X and reinvestigate positive and negative forces among political elites. We overcome an important selection bias in the previous study and reveal a negative force at play within the Republican Party. 
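
As a schematic of SGD-based estimation in this model family (my simplification, not the authors' CLSNA code), the sketch below fits a static latent-space model with edge probability sigmoid(alpha - ||z_i - z_j||^2) by stochastic gradient steps over node pairs; CLSNA adds temporal dynamics and attractor terms on top of such a likelihood.

```python
import numpy as np

def fit_latent_space(adj, d=2, lr=0.05, epochs=200, seed=0):
    """adj: (n, n) binary adjacency matrix. Returns latent positions Z and alpha."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    Z = rng.normal(scale=0.1, size=(n, d))
    alpha = 0.0
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for _ in range(epochs):
        rng.shuffle(pairs)
        for i, j in pairs:                      # one SGD step per node pair
            diff = Z[i] - Z[j]
            eta = alpha - diff @ diff
            p = 1.0 / (1.0 + np.exp(-eta))
            g = adj[i, j] - p                   # d loglik / d eta for a Bernoulli edge
            alpha += lr * g
            Z[i] += lr * g * (-2 * diff)        # chain rule through -||z_i - z_j||^2
            Z[j] += lr * g * (2 * diff)
    return Z, alpha
```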

Keywords

Longitudinal social networks

Attractors

Partisan polarization

Dynamic networks analysis

Co-evolving network model 

Abstract 2654

Co-Author(s)

Xiaojing Zhu, Boston University
Cantay Caliskan, University of Rochester
Dino Christenson, Washington University in St. Louis
Konstantinos Spiliopoulos, Boston University
Dylan Walker, Chapman University
Eric Kolaczyk, McGill University

First Author

Hancong Pan

Presenting Author

Hancong Pan

Nonparametric understanding of parametric tests

One argument against statistical tests, which have come under intense criticism recently, is that the null hypothesis is never true ("all models are wrong but some are useful"), and therefore it is not informative to reject it.

Given a (parametric) test, a general nonparametric space of distributions can be split into distributions for which the rejection probability is either (a) smaller than or equal to, or (b) larger than the nominal test level. These constitute the "effective null hypothesis" and "effective alternative" of the test. When tests are applied, there is normally an informal research hypothesis, which would be translated into a set of statistical models. This set can be called the "interpretative null hypothesis" (or "interpretative alternative," depending on how the test problem is formulated). Understanding whether a statistical test is appropriate in such a situation amounts to understanding how the effective hypotheses relate to the interpretative hypotheses. This is essentially different from the question of whether the test's model assumptions hold, which is not required for the test to be applied. 
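
The effective hypotheses can be probed by simulation. This sketch (my construction, not from the talk) estimates the rejection probability of a one-sample t-test at level 0.05 under a skewed, mean-zero distribution, indicating whether that distribution belongs to the test's effective null (rejection probability at most 0.05) or its effective alternative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, reps, level = 30, 20000, 0.05
rejections = 0
for _ in range(reps):
    x = rng.exponential(scale=1.0, size=n) - 1.0   # mean 0, but strongly skewed
    if stats.ttest_1samp(x, popmean=0.0).pvalue < level:
        rejections += 1
print("estimated rejection probability:", rejections / reps)
```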

Keywords

Foundations of statistics

Frequentism

Statistical tests 

Abstract 3142

First Author

Christian Hennig, Universita Di Bologna

Presenting Author

Christian Hennig, Universita Di Bologna