animl v3.1.1

Animl comprises a variety of machine learning tools for analyzing ecological data. The package includes a set of functions to classify subjects within camera trap field data and can handle both images and videos.

Table of Contents

  1. Tips for Use
  2. Models
  3. Installation
  4. Release Notes

Tips for Use

Below are the steps required for automatic identification of animals within camera trap images or videos.

1. File Manifest

First, build the file manifest of a given directory.

library(animl)

imagedir <- "examples/TestData"

# Create save-file placeholders and working directories
WorkingDirectory(imagedir, globalenv())

# Read exif data for all images within base directory
files <- build_file_manifest(imagedir, out_file=filemanifest_file, exif=TRUE)

# Process videos, extract frames for ID
allframes <- extract_frames(files, frames=3, out_file=imageframes_file,
                            parallel=TRUE, num_workers=parallel::detectCores())
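
At this point allframes holds one row per still image plus one row per extracted video frame. A quick structural check can catch path or EXIF problems before running detection:

# Inspect the combined manifest of images and extracted frames
str(allframes)
nrow(allframes)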

2. Object Detection

This produces a dataframe of images, including frames extracted from any videos, that will be fed into the classifier. The authors recommend a two-step approach: first use the MegaDetector object detector to identify potential animals, then apply a second classification model trained on the species of interest.

More info on MegaDetector:
  • MegaDetector v5/v1000
  • MegaDetector v6

# Load the MegaDetector model
detector <- load_detector("/Models/md_v5b.0.0.pt", model_type = 'mdv5', device='cuda:0')

# Obtain crop information for each image
mdraw <- detect(detector, allframes, resize_width=1280, resize_height=960, batch_size=4, device='cuda:0')

# Add crop information to dataframe
mdresults <- parse_detections(mdraw, manifest = allframes, out_file = detections_file)
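
Before classifying, it can help to eyeball the parsed detections. A minimal sketch, assuming parse_detections() returns one row per detection with a confidence column named conf (check names(mdresults) in your version):

# Summarize detection confidences (column name is an assumption)
summary(mdresults$conf)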

3. Classification

Then feed the crops into the classifier. We recommend only classifying crops identified by MD as animals.

# Pull out animal crops
animals <- get_animals(mdresults)

# Pull out crops that MD predicted as human, vehicle, or empty
empty <- get_empty(mdresults)

# load class list
classes <- load_class_list("/Models/Southwest/v3/southwest_v3_classes.csv")
class_list <- classes$class

# Load the classifier model
model_file <- "/Models/Southwest/v3/southwest_v3.pt"
southwest <- load_classifier(model_file, length(class_list))

# Obtain species prediction likelihoods
pred_raw <- classify(southwest, animals, resize_width=480, resize_height=480, 
                     out_file=predictions_file, batch_size=16, num_workers=8)

# Apply class labels and combine with the empty set
manifest <- single_classification(animals, empty, pred_raw, class_list)

If your data includes videos or image sequences, we recommend using the sequence_classification algorithm instead. It requires the raw output of classify() (pred_raw above).

# Sequence Classification
manifest <- sequence_classification(animals, empty=empty, pred_raw, classes=class_list,
                                    station_col="station", empty_class="empty")

4. Export

You can export the data into folders sorted by prediction:

manifest <- export_folders(manifest, out_dir=linkdir, out_file=results_file)

or into folders sorted by prediction and by station for export to camtrapR:

manifest <- export_camtrapR(manifest, out_dir=linkdir, out_file=results_file,
                            label_col='prediction', file_col="filepath", station_col='station')
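
The folder tree written by export_camtrapR can then be read by camtrapR itself. A minimal sketch, assuming the output follows camtrapR's station/species directory layout:

library(camtrapR)

# Build a camtrapR record table, taking species IDs from directory names
rec <- recordTable(inDir = linkdir, IDfrom = "directory", timeZone = "UTC")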

You can also export a .json file formatted for COCO:

manifest <- export_coco(manifest, class_list=class_list, out_file='results.json')
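
To verify the export, the file can be read back with jsonlite; a valid COCO file carries top-level images, annotations, and categories entries:

library(jsonlite)

# Read the COCO export back and check its top-level structure
coco <- fromJSON('results.json')
str(coco, max.level = 1)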

Alternatively, write the manifest to a .csv file for import into Timelapse:

# Write the manifest to csv (adjust columns to your Timelapse template)
write.csv(manifest, file='results.csv', row.names=FALSE)

Models

The Conservation Technology Lab has several models available for use.

Detectors:
  • MegaDetector v5/v1000
  • MegaDetector v6

Installation

Requirements

We recommend running animl on a computer with a dedicated GPU.
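
To confirm that a CUDA device is visible from R, one option is a quick check through reticulate (a sketch, assuming a Python environment with torch is already installed):

library(reticulate)

# Check whether PyTorch can see a CUDA GPU
torch <- import("torch")
if (torch$cuda$is_available()) {
  message("CUDA device: ", torch$cuda$get_device_name(0L))
} else {
  message("No GPU detected; detection and classification will run on the CPU.")
}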

Python

animl depends on Python and, when installed via miniconda, will install any missing Python package dependencies automatically.

The R version of animl delegates the machine learning to its Python counterpart: animl-py

Animl-r can be installed through CRAN:

install.packages('animl')

Animl will install animl-py and associated dependencies.

Animl-r can also be installed by downloading this repo, opening the animl.Rproj file in RStudio and selecting Build -> Install Package.
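
If R cannot find the Python side after installation, reticulate can be pointed at the right conda environment manually (the environment name "animl" below is an assumption; use whichever env holds animl-py):

library(reticulate)

# Point reticulate at the conda env containing animl-py (env name is an assumption)
use_condaenv("animl", required = TRUE)
py_module_available("animl")   # TRUE once animl-py is importable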

Release Notes

New for 3.1.1

  • compatible with animl-py v3.1.1
  • add export_camtrapR()
  • handle on-the-fly video frame generation
  • bug fixes
  • correct examples and documentation to reflect above changes

Contributors

Kyra Swanson
Mathias Tobler
Edgar Navarro
Josh Kessler
Jon Kohler
