Bayesian Regularized Feed Forward Multilayer Artificial Neural Networks

Hayrettin Okut, Instructor
University of Kansas School of Medicine
 
Tuesday, Aug 5: 8:30 AM - 12:30 PM
CE_23 
Professional Development Course/CE 
Music City Center 
Room: CC-110B 
Artificial Neural Networks (ANNs) are inspired by the functioning of the human brain and are capable of executing massively parallel computations, making them powerful tools for tasks such as mapping, function approximation, classification, and pattern recognition. ANNs excel at capturing complex, nonlinear relationships between input (predictor) variables and output (response) variables, allowing them to learn intricate functional forms adaptively. However, like other flexible statistical methods such as kernel regression and smoothing splines, ANNs are prone to overfitting, particularly with high-dimensional data such as those from genome-wide association studies (GWAS) or microarray experiments. An overfitted network tracks noise in the training data and therefore generalizes poorly to new observations.

To mitigate overfitting, regularization techniques are employed in ANNs. Regularization, or shrinkage, biases parameter estimates toward plausible values. Two common approaches in ANNs are Bayesian regularization (BR) and early stopping. Early stopping halts training once performance on a held-out validation set begins to deteriorate, which limits the effective number of parameters and thereby the network's effective Vapnik-Chervonenkis dimension. In Bayesian Regularized ANNs (BRANN), regularization is achieved by placing prior distributions on the network weights, which penalizes large weights and yields smoother mappings.
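
Concretely, BR can be viewed as minimizing a penalized objective F(w) = beta*E_D + alpha*E_W, where E_D is the squared-error data term and E_W is the sum of squared weights (a Gaussian prior on the weights). The sketch below is a minimal numpy illustration of this idea, not the course material: the network architecture, the toy data, and the fixed values of alpha and beta are assumptions for demonstration, whereas full Bayesian regularization re-estimates alpha and beta from the data (e.g., via MacKay's evidence framework).

```python
# Minimal sketch (assumed, not the course's code): a single-hidden-layer
# feed-forward network trained by gradient descent on the penalized objective
#   F(w) = beta * E_D + alpha * E_W,
# with E_D the (mean) squared-error data term and E_W the sum of squared weights.
# alpha and beta are held fixed here; full BR re-estimates them from the data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(60, 1))              # one predictor
y = np.sin(3.0 * X) + 0.1 * rng.normal(size=(60, 1))  # noisy nonlinear response
n = X.shape[0]

# tanh hidden layer, linear output
n_hidden = 8
W1 = rng.normal(scale=0.5, size=(1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)

alpha, beta = 0.01, 1.0   # illustrative fixed penalty/data weights (assumed)
lr = 0.1

for _ in range(3000):
    H = np.tanh(X @ W1 + b1)   # hidden-layer activations
    e = (H @ W2 + b2) - y      # residuals

    # Gradients of F; the alpha*W terms are the shrinkage (weight penalty),
    # biases are left unpenalized in this sketch.
    dW2 = beta * (H.T @ e) / n + alpha * W2
    db2 = beta * e.mean(axis=0)
    dZ  = (e @ W2.T) * (1.0 - H**2)   # back-propagate through tanh
    dW1 = beta * (X.T @ dZ) / n + alpha * W1
    db1 = beta * dZ.mean(axis=0)

    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g            # gradient step; penalty pulls weights toward zero

print("training MSE:", float(np.mean(((np.tanh(X @ W1 + b1) @ W2 + b2) - y) ** 2)))
```

Larger alpha shrinks the weights more aggressively and smooths the fitted mapping; the Bayesian machinery covered in the course chooses this trade-off from the data rather than by hand.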

Keywords
artificial neural network; Bayesian regularization; shrinkage; high-dimensional data; predictive modeling

Main Sponsor
Section on Statistical Learning and Data Science