TrapTagger AI

Camera traps generate an immense amount of data, a large proportion of which consists of empty images taken due to false triggers. Processing all of this data, and especially these empty images, is extremely time-consuming and monotonous. As such, we have set out to streamline the annotation process using AI in two ways.

Empty-Image Removal

In the first stage of processing, every image is analysed by Microsoft’s MegaDetector to find any animals, humans, and vehicles it contains – not to identify the species, but simply to locate them. Thanks to the sheer scale of MegaDetector’s training data and a conservative confidence threshold, we can then very reliably remove the majority of the empty images from the dataset without a human ever needing to see them.
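By way of illustration, the sketch below shows how such an empty/non-empty split might be performed on MegaDetector's standard batch-output JSON (an "images" list where each entry carries a "detections" list with per-detection "conf" scores). It is a minimal sketch only; the threshold value and file handling are illustrative assumptions, not TrapTagger's actual implementation.

```python
import json

# Illustrative conservative threshold; the value used in practice may differ.
CONFIDENCE_THRESHOLD = 0.8

def split_empty_images(megadetector_output_path):
    """Split images into 'occupied' and 'empty' using MegaDetector output.

    Assumes the standard MegaDetector batch-output JSON: a top-level
    'images' list, where each entry has a 'detections' list and each
    detection carries a 'conf' score between 0 and 1.
    """
    with open(megadetector_output_path) as f:
        results = json.load(f)

    occupied, empty = [], []
    for image in results["images"]:
        detections = image.get("detections") or []
        # Keep the image if any detection clears the conservative threshold.
        if any(d["conf"] >= CONFIDENCE_THRESHOLD for d in detections):
            occupied.append(image["file"])
        else:
            empty.append(image["file"])
    return occupied, empty
```

Only the images in the `occupied` list would then move on to annotation, while the `empty` list can be set aside.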

However, MegaDetector can sometimes mistake odd-looking branches, other camera traps, and similar anomalies that it has not encountered before for animals. These mistakes tend to be static, occurring in the same location across a camera’s images, and can thus be easily detected and removed. This further reduces the number of empty images that need to be examined.
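One possible way to flag such static detections is sketched below: within a single camera's images, any detection whose bounding box recurs in a large fraction of the images (measured by intersection-over-union) is treated as a static false positive. The IoU and frequency thresholds, and the simple pairwise comparison, are illustrative assumptions rather than our production logic.

```python
def iou(box_a, box_b):
    """Intersection-over-union for normalised [x, y, width, height] boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def flag_static_detections(camera_detections, iou_threshold=0.7, min_fraction=0.3):
    """Flag detections that sit in (almost) the same spot across one camera's images.

    camera_detections: list of (image_id, bbox) pairs for a single camera.
    A detection is considered static if a near-identical box occurs in at
    least `min_fraction` of that camera's images. Thresholds are illustrative.
    """
    num_images = len({image_id for image_id, _ in camera_detections})
    static = []
    for image_id, bbox in camera_detections:
        matching_images = {
            other_id for other_id, other_bbox in camera_detections
            if iou(bbox, other_bbox) >= iou_threshold
        }
        if len(matching_images) / num_images >= min_fraction:
            static.append((image_id, bbox))
    return static
```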

Importantly, the MegaDetector model has been trained on camera-trap data from a very wide variety of ecosystems, and thus performs well across animal species and environments. As such, we are able to handle datasets from any environment – even those not from Southern Africa – and still drastically reduce the time taken to annotate them by reliably removing the empty images.

Species Classification

Classifying the species contained in each detection forms the second part of the process, and is simultaneously the greatest challenge and the largest force multiplier in the system. Ideally, one would like an AI model capable of accurately identifying every single animal species on the planet, 100% of the time. However, such a classifier is simply not realistic at this point in time, and real-world models of that breadth do not achieve sufficient accuracy for stringent applications such as academic studies. Instead, we have chosen to focus – for the time being – on classifying only Southern African species, thanks to our partnership with WildCRU. As such, we have been able to achieve outstanding results for 55 regional species by training on a dataset of over 1 million images – the confusion matrix for which appears under the AI Performance section below.
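To make the role of the classifier concrete, the sketch below shows how each MegaDetector detection might be cropped out of its image and handed to a species classifier. The classifier object and its outputs are placeholders; only the general crop-then-classify flow reflects the process described above.

```python
from PIL import Image

def classify_detections(image_path, detections, classifier):
    """Classify the species inside each detected bounding box.

    `detections` is a list of dicts with normalised [x, y, width, height]
    boxes (the MegaDetector convention); `classifier` is any callable that
    maps a cropped PIL image to a (species_label, confidence) pair. Both the
    classifier and its labels are placeholders for illustration.
    """
    image = Image.open(image_path)
    width, height = image.size
    labels = []
    for det in detections:
        x, y, w, h = det["bbox"]
        # Convert the normalised box to pixel coordinates and crop it out.
        crop = image.crop((int(x * width), int(y * height),
                           int((x + w) * width), int((y + h) * height)))
        labels.append(classifier(crop))
    return labels
```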

But don’t let this Southern-African specialisation deter you! The species-classification step is entirely optional, and you will still be able to use our highly parallelised and optimised manual tagging interface to rapidly annotate your empty-image-free datasets. Then, once you have sufficient annotated data, we can train a model for your ecosystem, allowing you to have all the benefits of automatically-tagged datasets going forward.

AI Performance

Our state-of-the-art AI is capable of identifying 55 different Southern-African species with 99.11% precision (in other words, when the classifier assigns a species label, that label is correct 99.11% of the time), allowing you to be confident in your results.

We also offer our Southern-African species classifier as an API on our site. It returns bounding boxes, species classifications, and confidence scores for a list of image URLs. If you would like to make use of this service – for conservation or commercial purposes – please contact us.
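For a sense of how such a call might look, here is a hypothetical sketch using a placeholder endpoint; the real URL, request fields, and authentication details are not shown here and are provided when you contact us.

```python
import requests

# Hypothetical endpoint and payload shape, shown purely for illustration.
API_URL = "https://example.org/api/classify"

def classify_image_urls(image_urls, api_key):
    """Send a list of image URLs and return detections with species labels."""
    response = requests.post(
        API_URL,
        json={"images": image_urls},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=60,
    )
    response.raise_for_status()
    # Expected (illustrative) shape: one entry per image, each with a list of
    # detections carrying a bounding box, species label, and confidence score.
    return response.json()
```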

Help Us Help You

If you already have a large amount of camera-trap data and would like to make use of AI species classification, please contact us, and we can look into training a classifier for your species and region.