Artificial Intelligence

Each Step its Own AI

Rather than using a single AI to handle the entire image-processing pipeline, TrapTagger performs each step with its own AI. It uses three separate algorithms: one to detect and count animals, one to classify their species, and one to identify individuals.
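
As a rough mental model only, the three stages can be pictured as independent components chained per image. Every name and return shape in the sketch below is a hypothetical placeholder, not part of TrapTagger's actual interfaces.

```python
# Purely illustrative sketch of the three-stage structure described above.
# All function names and return values are hypothetical placeholders.

def detect_animals(image_path):
    """Stage 1 (placeholder): return bounding boxes for animals, e.g. from MegaDetector."""
    return []

def classify_species(image_path, box):
    """Stage 2 (placeholder): return a species label for one detected animal."""
    return "unknown"

def identify_individual(image_path, box, species):
    """Stage 3 (placeholder): return an individual ID, e.g. via coat-pattern matching."""
    return None

def process_image(image_path):
    results = []
    for box in detect_animals(image_path):  # empty images simply produce no results
        species = classify_species(image_path, box)
        individual = identify_individual(image_path, box, species)
        results.append({"box": box, "species": species, "individual": individual})
    return results
```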

Animal Detection & Counting

The first stage of processing is the detection of animals and, by implication, the removal of empty images. For this we use Microsoft’s MegaDetector, thanks to its unrivalled performance at this task. This step also generates bounding boxes that indicate the location of the animals in each image, which in turn can be used to count the animals it contains. With a conservative confidence threshold, the majority of empty images can be removed from the dataset very reliably, without ever needing to be seen by human eyes.
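
As an illustration of this filtering step, the sketch below reads a MegaDetector batch-output file (its standard JSON layout, with animals as category "1"), keeps any image that has a sufficiently confident animal detection, and counts the boxes in it. The threshold value is an assumption for the example; TrapTagger's actual threshold is not stated here.

```python
import json

# Sketch of empty-image filtering over MegaDetector's standard batch-output JSON.
# The threshold is an assumption: keeping it low (conservative) means an image is
# only declared empty when no detection is even mildly plausible.
CONF_THRESHOLD = 0.2
ANIMAL_CATEGORY = "1"  # MegaDetector's "animal" class

with open("megadetector_output.json") as f:
    results = json.load(f)

empty_images, animal_counts = [], {}
for image in results["images"]:
    animal_boxes = [
        d for d in image.get("detections", [])
        if d["category"] == ANIMAL_CATEGORY and d["conf"] >= CONF_THRESHOLD
    ]
    if animal_boxes:
        animal_counts[image["file"]] = len(animal_boxes)  # per-image animal count
    else:
        empty_images.append(image["file"])                # very likely truly empty

print(f"{len(empty_images)} images flagged as empty, {len(animal_counts)} contain animals")
```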

However, MegaDetector can sometimes mistake odd-looking branches, other camera traps, and other anomalies it has not encountered before for animals. These mistakes tend to be static: they occur in the same location across a camera’s images, and can thus be easily detected and removed. This further reduces the number of empty images that need to be examined.
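
To make the idea concrete, here is one way such static detections could be found: for each camera, flag boxes that occupy (almost) exactly the same position in many different images. The IoU and repeat-count thresholds are illustrative assumptions, not TrapTagger's actual values.

```python
def iou(a, b):
    """Intersection-over-union of two [x, y, w, h] boxes (same coordinate space)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def flag_static_detections(detections, iou_thresh=0.9, min_repeats=20):
    """detections: [(image_file, box), ...] from a single camera.

    Returns the detections that sit in (almost) the same spot in many images and
    are therefore likely branches, rocks, or other cameras rather than animals.
    """
    clusters = []  # each cluster: [representative_box, [(image_file, box), ...]]
    for image_file, box in detections:
        for cluster in clusters:
            if iou(cluster[0], box) >= iou_thresh:
                cluster[1].append((image_file, box))
                break
        else:
            clusters.append([box, [(image_file, box)]])
    static = []
    for _, members in clusters:
        if len(members) >= min_repeats:
            static.extend(members)
    return static
```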

Importantly, the MegaDetector model has been trained on international camera-trap data from a very wide variety of ecosystems, and thus performs well across animal species and environments. As such, data from biomes where we do not yet have a species classifier can still take advantage of this first processing step to drastically reduce the number of images that need to be manually annotated through the platform.

Species Classification

The next stage of the process is the classification of the species contained in each image. Here TrapTagger leverages a number of post-processing techniques to label species only in images where the classifier is very confident of its classification, leaving the challenging images to the human eye. Additionally, TrapTagger lets you process your data with one species classifier and then check the results with another, to further hone them.
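
As a simplified picture of that confidence-based triage, the sketch below auto-labels a detection only when the top class is very confident and queues everything else for manual review. The threshold and score format are assumptions for illustration, not TrapTagger's actual post-processing.

```python
# Illustrative confidence triage: the 0.95 threshold and the score layout are
# assumptions, not TrapTagger's actual values.
AUTO_LABEL_THRESHOLD = 0.95

def triage_classification(scores):
    """scores: {species_name: probability} for one detected animal."""
    species, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence >= AUTO_LABEL_THRESHOLD:
        return {"label": species, "source": "classifier", "confidence": confidence}
    return {"label": None, "source": "needs_review", "confidence": confidence}

# A confident prediction is labelled automatically...
print(triage_classification({"impala": 0.97, "kudu": 0.02, "nyala": 0.01}))
# ...while an ambiguous one is left for the human eye.
print(triage_classification({"impala": 0.55, "kudu": 0.40, "nyala": 0.05}))
```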

Ideally, one would like an AI model capable of accurately identifying every single animal species on the planet, 100% of the time. However, such a classifier is simply not realistic at this point in time, and real-world models of that scope do not achieve sufficient accuracy for stringent applications such as academic studies. Instead, we focus on creating regional classifiers, training them on our users’ data as soon as enough is available, expanding our reach little by little.

Our Current Classifiers

At present, we offer the following regional species classifiers: a general sub-Saharan classifier, a more specialised southern African classifier, a general European classifier, and a Central/South American classifier. Additionally, we have both a south-east Asian and an Australian classifier in development. If you have a database of annotated camera-trap images from these regions (or anywhere else, for that matter), please consider sharing it with us to help improve the performance of these classifiers and help the community at large.

We Can Train One For You!

Don’t let the lack of a species classifier for your particular region deter you. The automatic species-classification step is entirely optional, and you will still be able to use our highly parallelised and optimised manual-tagging interface to rapidly annotate your empty-image-free datasets. Then, once you have sufficient annotated data, we can train a model for your ecosystem, giving you all the benefits of automatically annotated datasets going forward.

Or Bring Your Own!

TrapTagger is set up to easily integrate third-party species classifiers, which means you can bring your own classifier to use within the TrapTagger environment, and even share it with everybody else! This provides a great opportunity for ecology and zoology departments to collaborate with their computer-science counterparts, and to share these creations to help the planet.
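
Conceptually, a third-party classifier just needs to map an image crop to species probabilities. The interface below is a hypothetical sketch of that idea only; TrapTagger's actual integration requirements are not reproduced here.

```python
from typing import Dict

import numpy as np

class MyRegionalClassifier:
    """Hypothetical pluggable classifier: image crop in, species probabilities out."""

    def __init__(self, weights_path: str):
        self.weights_path = weights_path  # load your trained model here

    def predict(self, crop: np.ndarray) -> Dict[str, float]:
        # Replace with a real forward pass over `crop` (an H x W x 3 image array).
        return {"unknown": 1.0}
```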

Individual Identification

The final stage in this process is the Holy Grail of camera trapping: the identification of individuals. This is an extremely challenging problem, even for very experienced researchers. Luckily, some species make it much easier for us by having very clear and unique coat patterns, which lend themselves well to AI processing. This is where HotSpotter comes in: an open-source coat-pattern recognition algorithm that has become the industry standard since it was introduced back in 2013.

Unfortunately, HotSpotter is far from being able to reliably perform automatic individual identification by itself. As such, it is used to calculate a similarity score between animals, which, in conjunction with our own heuristic algorithm, informs which sets of images should be manually reviewed to determine whether they are indeed of the same individual.
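
To illustrate how a similarity score can drive that review, the sketch below simply ranks candidate pairs and keeps only the most promising ones for a human to confirm. The scores, threshold, and cut-off are invented for the example; TrapTagger's heuristic algorithm itself is not reproduced here.

```python
# Illustrative review queue: rank pairs of detections by similarity and keep the
# top candidates for manual confirmation. Thresholds are example values only.
def build_review_queue(pair_scores, top_k=10, min_score=0.3):
    """pair_scores: [((detection_a, detection_b), similarity_score), ...]."""
    ranked = sorted(pair_scores, key=lambda item: item[1], reverse=True)
    return [pair for pair, score in ranked if score >= min_score][:top_k]

# Example: only the high-scoring pairs reach the human reviewer.
candidates = [
    (("leopard_001.jpg", "leopard_014.jpg"), 0.82),
    (("leopard_001.jpg", "leopard_203.jpg"), 0.12),
    (("leopard_007.jpg", "leopard_051.jpg"), 0.64),
]
print(build_review_queue(candidates))
```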

This manual review process has been massively streamlined and automated to make it as easy as possible. This means that if HotSpotter is known to perform poorly on your particular species of interest, you can still use the platform by switching it off and relying instead on the combination of the heuristic algorithm and the automated workflow, which identifies individuals more reliably than can be achieved by hand.

Application Programming Interface (API)

For those of you with your own platforms or systems that could make use of automated species identification, we also offer our classifiers as an API from our site. It returns bounding boxes, species classifications, and confidence scores for a list of image URLs. If you would like to make use of this service, whether for conservation or commercial purposes, please contact us.
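
As an illustration of that pattern only, a request might look something like the sketch below. The endpoint URL, authentication scheme, and response field names are hypothetical placeholders rather than our documented API.

```python
import requests

# Illustrative only: the endpoint, credential, and field names below are
# placeholders. The sketch just shows the described pattern of sending a list
# of image URLs and receiving boxes, species labels, and confidence scores.
API_URL = "https://example.org/classify"   # placeholder endpoint
API_KEY = "your-api-key"                   # placeholder credential

payload = {
    "image_urls": [
        "https://example.org/images/cam01_0001.jpg",
        "https://example.org/images/cam01_0002.jpg",
    ]
}

response = requests.post(API_URL, json=payload,
                         headers={"Authorization": f"Bearer {API_KEY}"})
response.raise_for_status()

for image in response.json().get("results", []):
    for det in image.get("detections", []):
        print(image.get("url"), det.get("bbox"), det.get("species"), det.get("confidence"))
```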

Help Us, Help You

If you already have a large amount of camera-trap data and would like to make use of AI species classification, please contact us and we can look into training something for you.