CameraTrapDetectoR: deep learning methods to detect, classify, and count animals in camera trap images.

Conference: Symposium on Data Science and Statistics (SDSS) 2024
06/06/2024: 1:45 PM - 1:50 PM EDT
Lightning 

Description

Camera traps are a popular, non-invasive, and cost-effective way to monitor animal populations and to evaluate animal behavior and the ecological processes influencing populations. Applications include, but are not limited to, detecting endangered or invasive species, determining species interactions, predicting population dynamics, and identifying diseased animals. The time and labor required to manually classify the potentially millions of images generated by a single camera array present a significant challenge; reducing this burden facilitates implementation of larger or longer-lasting camera trap arrays, resulting in more comprehensive analyses and better-informed decisions. To address this challenge, a multi-agency USDA team has developed CameraTrapDetectoR, a free, open-source tool that deploys a series of generalizable deep learning object detection models at the class, family, and species taxonomic levels to detect, classify, and count animals in camera trap images. The tool is available as an R package with an R Shiny interface, as a desktop application, or as a command-line Python script, so it can be easily integrated into many analytical pipelines. Crucially, the tool enables users to retain complete data privacy. Each model is independently trained on a dataset of 311,584 manually annotated images from 29 unique sites, representing 58 unique families and 177 unique species, currently using a Faster R-CNN architecture with a ResNet-50 backbone. Median recall on test data for the most recent models is 87.5% for the species model (n = 78, range 51.2%-100%), 93.9% for the family model (n = 33, range 56.2%-100%), and 98.3% for the class model (n = 5, range 97.1%-100%). New models are iteratively trained using additional images and state-of-the-art computer vision approaches to increase prediction accuracy on out-of-sample, out-of-site data. The e-poster presentation will include a tool demonstration on multiple platforms.
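
To illustrate the kind of detection-and-counting workflow the abstract describes (a Faster R-CNN object detector with a ResNet-50 backbone applied to camera trap images), the following Python sketch runs torchvision's generic COCO-pretrained Faster R-CNN on a single image and tallies detections above a score threshold. This is not the CameraTrapDetectoR package or its trained models; the image path, the 0.5 confidence threshold, and the COCO label set are assumptions made purely for illustration.

# Minimal sketch: run a Faster R-CNN (ResNet-50 FPN backbone) detector on one
# camera trap image and count detections per predicted category.
# Uses torchvision's generic COCO-pretrained weights for illustration only;
# this is NOT the CameraTrapDetectoR models. The file path and the 0.5 score
# threshold are assumptions.

from collections import Counter

import torch
from PIL import Image
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights)
model.eval()

preprocess = weights.transforms()  # resizing/normalization expected by the model
image = Image.open("example_camera_trap.jpg").convert("RGB")  # hypothetical path
batch = [preprocess(image)]  # the detector takes a list of image tensors

with torch.no_grad():
    predictions = model(batch)[0]  # dict with 'boxes', 'labels', 'scores'

keep = predictions["scores"] >= 0.5  # assumed confidence threshold
names = [weights.meta["categories"][int(i)] for i in predictions["labels"][keep]]

# Per-image counts by predicted category, analogous to counting animals per image.
print(Counter(names))

In the actual tool, the same pattern of predicting boxes, labels, and scores and then thresholding and counting them would be applied with the project's class-, family-, and species-level models rather than COCO categories.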

Keywords

Object Detection

Computer Vision

Deep Learning

R Package

Animal Behavior 

Presenting Author

Amira Burns, USDA - ARS - APHIS

First Author

Amira Burns, USDA - ARS - APHIS

CoAuthor(s)

Hailey Wilmer, USDA - ARS
Ryan S. Miller, USDA APHIS CEAH
Patrick E. Clark, USDA Agricultural Research Service
Jay Angerer, USDA Agricultural Research Service

Tracks

Practice and Applications
Symposium on Data Science and Statistics (SDSS) 2024