Advances in Time Series and Financial Econometrics

Dhrubajyoti Ghosh Chair
Washington University in St. Louis
 
Tuesday, Aug 5: 8:30 AM - 10:20 AM
4085 
Contributed Papers 
Music City Center 
Room: CC-201B 

Main Sponsor

Business and Economic Statistics Section

Presentations

Frequency identification in Singular Spectrum Analysis

The decomposition of time series into their fundamental components is a key problem in many disciplines. Singular Spectrum Analysis (SSA) is a nonparametric method for time series modeling and forecasting. By applying Singular Value Decomposition (SVD) to the trajectory matrix (or, equivalently, by diagonalizing the second-moment matrix), SSA extracts quasi-orthogonal components that maximize variability. These components provide natural estimates of the underlying trend, cycles, and noise in the original time series. However, standard SSA does not explicitly associate these components with specific oscillation frequencies.
We introduce a novel extension of SSA that simultaneously achieves frequency identification and variance diagonalization. As a byproduct, our approach also yields a consistent estimator of the spectral density. We illustrate the performance of the method through simulations and apply it to various real-world datasets, including paleoclimate temperature records and Gross Domestic Product data from multiple countries. In the latter application, we disentangle common from idiosyncratic fluctuations at each frequency in international business cycles. 
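The baseline SSA decomposition the abstract builds on (not the authors' frequency-identifying extension) can be sketched in a few lines: embed the series in a Hankel trajectory matrix, take the SVD, and average each rank-one term over its anti-diagonals to recover an additive component. The window length and toy series below are illustrative choices, not from the paper.

```python
import numpy as np

def ssa_decompose(x, L):
    """Basic SSA: embed the length-N series x into an L x K trajectory
    matrix, apply the SVD, and reconstruct one additive component per
    singular triple by anti-diagonal (Hankel) averaging."""
    N = len(x)
    K = N - L + 1
    # Trajectory (Hankel) matrix: column j holds x[j:j+L]
    X = np.column_stack([x[j:j + L] for j in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    components = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])  # rank-one piece of X
        # Anti-diagonal averaging maps the L x K matrix back to length N
        comp = np.array([Xi[::-1].diagonal(k).mean()
                         for k in range(-(L - 1), K)])
        components.append(comp)
    return np.array(components)

# Toy series: linear trend + annual-style cycle + noise.
# The rank-one reconstructions sum back to the original series exactly.
t = np.arange(200)
x = (0.02 * t + np.sin(2 * np.pi * t / 12)
     + 0.1 * np.random.default_rng(0).normal(size=200))
comps = ssa_decompose(x, L=40)
```

Because the SVD reconstructs the trajectory matrix exactly and Hankel averaging is linear, the components sum back to the input series; leading components typically capture the trend and cycle, trailing ones the noise.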

Keywords

signal extraction

time series

singular spectrum analysis

cycles

frequency identification

eigenvalues 

Co-Author(s)

Diego Fresoli, Universidad Autónoma de Madrid
Gabriel Martos-Venturini, Universidad Torcuato Di Tella

First Author

Pilar Poncela, Universidad Autónoma de Madrid

Presenting Author

Pilar Poncela, Universidad Autónoma de Madrid

Efficient Dimension Reduction for Multivariate Time Series Using Partial Envelopes

Overparameterization poses a significant challenge for standard vector autoregressive (VAR) models, particularly in high-dimensional time series, as it restricts the number of variables and lags that can be effectively incorporated. To address this, we introduce partial envelope models designed for efficient dimension reduction in multivariate time series. Our approach provides a parsimonious framework by selectively focusing on key lag variables, leading to substantial efficiency gains in coefficient estimation compared to standard VAR models. By concentrating on a subset of relevant lags, our models enhance estimation efficiency while maintaining predictive accuracy. We demonstrate these efficiency improvements through simulated experiments and a real-data analysis, highlighting the advantages of our proposed partial envelope methodology. 
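For context, the overparameterization the abstract refers to is easy to quantify: a VAR with k series and p lags has k + p*k^2 conditional-mean parameters, growing quadratically in dimension. The sketch below fits a plain VAR(1) by least squares (the unrestricted baseline, not the partial envelope estimator); all data and parameter values are illustrative.

```python
import numpy as np

def fit_var1(Y):
    """OLS for a VAR(1): Y[t] = c + A @ Y[t-1] + e. With k series and
    p lags an unrestricted VAR has k + p*k**2 mean parameters, the
    overparameterization that motivates dimension reduction."""
    X = np.column_stack([np.ones(len(Y) - 1), Y[:-1]])  # intercept + lag
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
    c, A = B[0], B[1:].T
    return c, A

# Simulate a small stationary VAR(1) and recover its coefficients
rng = np.random.default_rng(3)
k, n = 3, 2000
A_true = 0.4 * np.eye(k)
Y = np.zeros((n, k))
for t in range(1, n):
    Y[t] = A_true @ Y[t - 1] + rng.normal(size=k)
c_hat, A_hat = fit_var1(Y)
```

Even at k = 3 and p = 1 this fit already has 12 mean parameters; at k = 20 and p = 4 it would have 1,620, which is the setting where more parsimonious estimators pay off.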

Keywords

VAR

Dimension reduction

Envelopes 

Co-Author

S. Yaser Samadi, Southern Illinois University-Carbondale

First Author

H.M. Wiranthe Herath, Drake University

Presenting Author

H.M. Wiranthe Herath, Drake University

Determining Generalized Threshold Structures in Threshold Autoregressive Models

Threshold autoregressive (TAR) models are widely used in nonlinear time series analysis, where the autoregressive structure changes according to threshold variables. While previous studies have proposed methods for estimating threshold values, they generally assume that the threshold variable and autoregressive order are known. This study considers the general scenario in which the threshold variable may be a linear combination of multiple lagged variables, and addresses: (1) estimation of the threshold variable by finding the optimal linear combination coefficients, (2) determination of the autoregressive order, and (3) estimation of the threshold values and determination of a suitable number of regimes. For efficient computation, we adopt Bayesian optimization to determine threshold structures, using a re-parameterization that transforms the parameter estimation problem into a model selection problem implemented via a greedy algorithm (Chan et al., 2017). The proposed methodology applies to univariate and multivariate time series and achieves accurate threshold-structure determination and efficient forecasts, as validated via simulation studies and applications. 
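As background to the estimation problem, the classical special case with a single known threshold lag can be handled by a grid search over candidate thresholds, fitting least-squares AR coefficients within each regime. This sketch illustrates that baseline only (not the paper's Bayesian-optimization procedure for general linear-combination thresholds); all parameters are illustrative.

```python
import numpy as np

def simulate_setar(n, r, phi_low, phi_high, seed=0):
    """Two-regime SETAR(1): the AR coefficient switches according to
    whether the previous observation exceeds the threshold r."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        phi = phi_low if y[t - 1] <= r else phi_high
        y[t] = phi * y[t - 1] + rng.normal()
    return y

def estimate_threshold(y, candidates):
    """Grid search: for each candidate threshold, fit an AR(1) slope by
    least squares within each regime; keep the threshold minimizing the
    pooled sum of squared residuals."""
    x, z = y[:-1], y[1:]
    best_r, best_ssr = None, np.inf
    for r in candidates:
        low = x <= r
        ssr = 0.0
        for mask in (low, ~low):
            if mask.sum() < 10:          # require enough points per regime
                break
            phi = (x[mask] @ z[mask]) / (x[mask] @ x[mask])
            ssr += np.sum((z[mask] - phi * x[mask]) ** 2)
        else:
            if ssr < best_ssr:
                best_r, best_ssr = r, ssr
    return best_r

y = simulate_setar(3000, r=0.0, phi_low=0.8, phi_high=-0.4)
r_hat = estimate_threshold(y, np.quantile(y, np.linspace(0.15, 0.85, 71)))
```

With a strong contrast between regimes, the threshold estimate is sharp; the cost of extending this brute-force search to linear combinations of several lags is what motivates the Bayesian-optimization approach in the abstract.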

Keywords

Bayesian optimization

High-dimensional AIC

Threshold autoregression 

Co-Author(s)

Pei-Ching Ho, National Tsing Hua University
Lai-Heng Sim, National Tsing Hua University

First Author

Nan-Jung Hsu, Institute of Statistics and Data Science, National Tsing Hua University

Presenting Author

Nan-Jung Hsu, Institute of Statistics and Data Science, National Tsing Hua University

Estimation and Inference for the Joint Autoregressive Quantile-Expected Shortfall Models

Expected shortfall is defined as the truncated mean of a random variable below a specified quantile level and is widely recognized as an important risk measure. Motivated by the empirical observation of clustering patterns in financial risks, we consider a joint autoregressive model for both the conditional quantile and the conditional expected shortfall. Existing estimation methods for such models typically rely on minimizing a nonlinear, nonconvex joint loss function, which is challenging to solve and often yields inefficient estimators. We instead employ a weighted two-step estimation approach. Both theoretically and numerically, the proposed estimator is more efficient than those obtained by existing methods for a general class of location-scale family time series. Our empirical results on stock market data indicate that the proposed models effectively capture clustering patterns and leverage effects in the conditional expected shortfall. 
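The defining truncated-mean relationship is simple to compute empirically. The sketch below is the plain unconditional empirical estimator (not the paper's joint autoregressive model or two-step procedure); the sample size and quantile level are illustrative.

```python
import numpy as np

def expected_shortfall(returns, tau=0.05):
    """Empirical expected shortfall at level tau: the mean of the
    observations at or below the tau-quantile of the sample
    (the lower tail of the return distribution)."""
    q = np.quantile(returns, tau)   # the Value-at-Risk boundary
    tail = returns[returns <= q]    # truncated sample below the quantile
    return tail.mean()

# For standard normal returns the 5% expected shortfall is about -2.06,
# well below the 5% quantile of about -1.64.
rng = np.random.default_rng(1)
r = rng.normal(0.0, 1.0, size=100_000)
es = expected_shortfall(r, 0.05)
```

The gap between the quantile and the expected shortfall is exactly what makes the latter informative about tail severity, and what the joint quantile-ES models in the abstract track dynamically over time.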

Keywords

Expected Shortfall

Time Series

Financial Risk Management

Neyman Orthogonality

Quantile Regression 

Co-Author(s)

Kean Ming Tan
Xuming He, Washington University in St. Louis

First Author

Peiyao Cai, University of Michigan, Ann Arbor

Presenting Author

Peiyao Cai, University of Michigan, Ann Arbor

Bootstrap specification tests for multivariate GARCH processes

We develop tests for the correct specification of the conditional distribution in multivariate GARCH models based on empirical processes. We transform the multivariate data into univariate data based on the marginal and conditional cumulative distribution functions specified by the null model. The test statistics considered are based on empirical processes of the transformed data in the presence of estimated parameters. The limiting distributions of the proposed test statistics are model dependent and are not free from the underlying nuisance parameters, making the tests difficult to implement. To address this, we develop a novel bootstrap procedure which is shown to be asymptotically valid irrespective of the presence of nuisance parameters. The approach utilises a scalable iterated bootstrap method and is straightforward to implement, as the associated test statistics have simple closed-form expressions. A simulation study demonstrates that the new tests perform well in finite samples. A real data example illustrates the testing procedure. 
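The transformation step described here, mapping multivariate observations to approximately uniform variates via the null model's marginal and conditional CDFs, can be illustrated with a bivariate Gaussian stand-in for the GARCH null. This is a sketch of the general idea only (the Rosenblatt transform plus a Kolmogorov-Smirnov-type distance), not the paper's test statistics or bootstrap; the correlation and sample size are illustrative.

```python
import math
import numpy as np

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rosenblatt_bivariate_normal(x1, x2, rho):
    """Under a bivariate normal null with correlation rho, map the data
    to uniforms: the marginal CDF of x1, then the conditional CDF of
    x2 given x1 (which is normal with mean rho*x1, var 1 - rho**2)."""
    u1 = np.array([norm_cdf(v) for v in x1])
    z = (x2 - rho * x1) / math.sqrt(1.0 - rho ** 2)
    u2 = np.array([norm_cdf(v) for v in z])
    return u1, u2

def ks_uniform(u):
    """Kolmogorov-Smirnov distance of a sample from Uniform(0, 1)."""
    u = np.sort(u)
    n = len(u)
    grid = np.arange(1, n + 1) / n
    return max(np.max(grid - u), np.max(u - (grid - 1.0 / n)))

# Simulate from the null model; the transformed data should look uniform
rng = np.random.default_rng(2)
rho = 0.6
x1 = rng.normal(size=5000)
x2 = rho * x1 + math.sqrt(1.0 - rho ** 2) * rng.normal(size=5000)
u1, u2 = rosenblatt_bivariate_normal(x1, x2, rho)
```

Under a misspecified null (e.g. the wrong rho), the KS distances of u1 and u2 from uniformity grow, which is what an empirical-process test statistic detects; the nuisance-parameter complication in the abstract arises because rho must be estimated.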

Keywords

Bootstrap

Multivariate GARCH

Specification test 

Co-Author

Kanchana Nadarajah, University of Sheffield

First Author

Indeewara Perera, University of Sheffield

Presenting Author

Indeewara Perera, University of Sheffield

Diagnostic for Volatility and Local Influence Analysis for the Vasicek Model

The Ornstein-Uhlenbeck process is widely used in financial engineering to describe the dynamics of interest rates, currency exchange rates, and asset price volatilities. Influential observations may significantly affect the validity of inferential procedures and conclusions drawn from them. Identifying atypical data is, therefore, an essential step in any statistical analysis. The local influence approach is a set of methods designed to detect the effect of small perturbations of the model or data on the inference. In this work, we derive local influence methods for stochastic interest rate models typically used to model and predict interest or exchange rates. In particular, we develop and implement local influence diagnostic techniques based on likelihood displacement. We primarily discuss the Vasicek model with perturbation of the variance and the response. Additionally, we propose a statistic to test the hypothesis of constant volatility based on the Gradient test. Finally, we illustrate the methodology using the monthly exchange rate between the US dollar and the Swiss franc over a period exceeding 20 years and assess the performance through a simulation study. 
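For reference, the Vasicek model dr = kappa*(theta - r)*dt + sigma*dW has an exact Gaussian transition, so it can be simulated without discretization error and fitted through its implied AR(1) representation. This is a minimal sketch of the model itself with illustrative parameters, not the paper's influence diagnostics or gradient test.

```python
import numpy as np

def simulate_vasicek(r0, kappa, theta, sigma, dt, n, seed=0):
    """Exact simulation of the Vasicek (Ornstein-Uhlenbeck) model using
    its Gaussian transition: r[t+dt] = theta + (r[t]-theta)*exp(-kappa*dt)
    plus noise with the exact conditional variance."""
    rng = np.random.default_rng(seed)
    a = np.exp(-kappa * dt)
    sd = sigma * np.sqrt((1.0 - a ** 2) / (2.0 * kappa))
    r = np.empty(n + 1)
    r[0] = r0
    for t in range(n):
        r[t + 1] = theta + (r[t] - theta) * a + sd * rng.normal()
    return r

def fit_vasicek(r, dt):
    """Recover (kappa, theta, sigma) from the AR(1) regression
    r[t+1] = c + b*r[t] + e implied by the exact discretization."""
    x, y = r[:-1], r[1:]
    b, c = np.polyfit(x, y, 1)          # slope, intercept
    resid = y - (c + b * x)
    kappa = -np.log(b) / dt
    theta = c / (1.0 - b)
    sigma = resid.std(ddof=2) * np.sqrt(2.0 * kappa / (1.0 - b ** 2))
    return kappa, theta, sigma

# Monthly path (dt = 1/12) and recovery of the generating parameters
path = simulate_vasicek(r0=0.03, kappa=2.0, theta=0.05, sigma=0.02,
                        dt=1 / 12, n=5000)
k_hat, th_hat, s_hat = fit_vasicek(path, dt=1 / 12)
```

A perturbation of the response or of the variance, as studied in the abstract, acts on exactly this likelihood; an influential observation would move the fitted (kappa, theta, sigma) disproportionately relative to its weight.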

Keywords

Ornstein-Uhlenbeck processes

Local influence diagnostics

Stochastic interest rate models

Likelihood inference

Gradient test 

Co-Author(s)

Alonso Molina, Pontificia Universidad Catolica de Chile
Isabelle Beaudry

First Author

Manuel Galea, Pontificia Universidad Catolica de Chile

Presenting Author

Isabelle Beaudry