Camera traps are a powerful tool for studying wildlife and are widely used by ecologists around the world. However, these motion-activated cameras can generate immense amounts of data, a large proportion of which is simply empty images triggered by wind and other disturbances. Traditionally, these thousands of images were sorted manually by the ecologists themselves in an error-prone and extremely time-consuming process – one that often took longer than the camera-trap survey itself. This presented a massive obstacle to camera-trap-focused research. As such, we wanted to develop a way to make the process drastically shorter and more accurate. In turn, this would allow ecologists to run more camera-trap surveys more often, resulting in more complete data and more statistically robust results across ever-greater areas. Moreover, it would free ecologists to spend their new-found time on the analysis of the data itself, and make the world a better place.
It was out of this need that TrapTagger was born – an AI-driven tool for processing camera-trap surveys.
How does it work?
TrapTagger is an open-source web application that uses the latest artificial-intelligence technologies, in combination with highly optimised manual annotation interfaces, to process camera-trap data. It has been developed in close collaboration with the University of Oxford’s Wildlife Conservation Research Unit (WildCRU) through third-party philanthropic funding, and is completely free to use.
We use three separate AI algorithms to process your data: one detects and counts the animals in the images, one classifies the species of those animals, and a third calculates the similarity between animal coat patterns, aiding in the identification of individuals. Then, depending on how accurate you need your data to be and how many man-hours you have at your disposal, you can choose to additionally annotate your images manually with the AI as an aid, and then check yourself against the AI to further hone your results. Alternatively, you can rely on the AI’s annotations and only annotate the images where it is not sufficiently certain of their contents.
Since the animal detector – Microsoft’s MegaDetector – was trained on an international dataset, it should work well on your data regardless of where it’s from. So even if we do not have a species classifier for your region, you can still use TrapTagger to manually annotate your animal-containing images. Once you have enough data annotated, we can train a species classifier for your region, unlocking the rest of TrapTagger’s capabilities!
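The detect-then-classify triage described above can be sketched in a few lines of Python. This is a minimal illustration, not TrapTagger's actual code: the detector and classifier are stubbed placeholders (real deployments would call MegaDetector and a regional species classifier), and the confidence threshold is an assumed value.

```python
# Illustrative sketch of a TrapTagger-style triage pipeline: a detector
# finds animals, a classifier labels species, and low-confidence results
# are routed to manual review. All function names, fake predictions, and
# the 0.8 threshold are assumptions for demonstration only.
from dataclasses import dataclass

@dataclass
class Detection:
    image: str
    label: str
    confidence: float

def detect_animals(image: str) -> list[Detection]:
    # Placeholder standing in for a MegaDetector-style animal detector.
    fake = {"img1.jpg": [("animal", 0.97)],
            "img2.jpg": [],                      # empty frame (e.g. wind trigger)
            "img3.jpg": [("animal", 0.55)]}
    return [Detection(image, lbl, c) for lbl, c in fake.get(image, [])]

def classify_species(det: Detection) -> Detection:
    # Placeholder species classifier; reuses detector confidence for brevity.
    species = "lion" if det.confidence > 0.9 else "unknown"
    return Detection(det.image, species, det.confidence)

def triage(images, threshold=0.8):
    auto, review, empty = [], [], []
    for img in images:
        dets = detect_animals(img)
        if not dets:
            empty.append(img)  # no animals: filtered out automatically
            continue
        for det in map(classify_species, dets):
            # Confident predictions are accepted; the rest go to a human.
            (auto if det.confidence >= threshold else review).append(det)
    return auto, review, empty

auto, review, empty = triage(["img1.jpg", "img2.jpg", "img3.jpg"])
```

The key design point is that the empty-image filtering happens before any human sees the data, which is where the bulk of the time saving comes from.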
There are many different ways you can annotate your data through TrapTagger’s fast, efficient, and parallelised interfaces. Each is entirely optional, and can be performed on a species-by-species basis, allowing you to only annotate the information you need for your species of interest, and no more.
- Cluster Species – label the species contained in each animal-containing cluster
- AI Check – check your species labels against the AI to correct mistakes and hone your results
- Informational Tagging – add custom informational tags to the clusters containing your species of interest
- Sighting (Box) Correction – correct the AI-generated boxes to get 100% accurate counts of animals in each image
- Individual Identification – identify specific individuals based on AI-generated suggestions
Why the Cloud?
There are obvious downsides to using a web application to annotate camera-trap data, the primary one being the need to upload many thousands of images to the cloud – especially when you are out in the field with limited connectivity. However, artificial-intelligence algorithms are very computationally intensive: an average laptop would require weeks to process a 200k-image survey. Instead, we are able to request server capacity on demand, allowing us to process in excess of 80k images per hour. Moreover, we only pay for this increased capacity while it’s in use, keeping our running costs down.
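A quick back-of-the-envelope calculation makes the gap concrete. The cloud rate of 80k images per hour is stated above; the laptop rate below is a hypothetical figure chosen to be consistent with "weeks" of processing.

```python
# Throughput comparison for a 200k-image survey.
survey_images = 200_000

cloud_rate = 80_000                        # images/hour (stated in the text)
cloud_hours = survey_images / cloud_rate   # 200k / 80k = 2.5 hours

laptop_rate = 500                          # images/hour — hypothetical laptop figure
laptop_days = survey_images / laptop_rate / 24  # 400 hours ≈ 16.7 days
```

At these rates the cloud finishes in an afternoon what would otherwise occupy a laptop for over two weeks.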
Overall, we have found that the advantages of cloud computing – backed-up data, worldwide data accessibility, etc. – far outweigh these drawbacks, especially since we have built the annotation interface to be as data-efficient as possible: it uses lower-resolution, compressed images that are pre-loaded as you annotate, meaning you will more than likely be able to annotate from the field.
However, if you are unable to upload your data from your institution, speak to us about sending us a hard drive of data that we can upload for you.
A Myriad of Output Formats
To make TrapTagger as powerful as possible, we have built in a massive array of output formats, allowing it to slot directly into your pre-existing workflows.
- Explore – view your labelled images by species, allowing you to see the fruits of your labour, as well as find and correct mistakes
- CSV – export the entire database of information in a custom CSV file, or simply use the default format
- Metadata-labelled images – download your images with all labels in their metadata, and additionally have those images sorted into folders by species, or even have the labels included in their file names
- Export – export your data in formats ready for immediate import into other platforms like WildBook