
What is TrapTagger?
TrapTagger is a powerful web application that leverages the latest artificial intelligence (AI) technology to automatically annotate your camera-trap data – locating and identifying all the animals in your images and videos. It can also help you build and maintain a database of known individuals, identified by their distinct coat patterns, and provides a myriad of built-in analysis tools. What’s more, thanks to its cloud-based design, it can help you securely manage and store your vast quantities of data.
Best of all – TrapTagger is completely FREE!
Why?
Camera traps are a powerful tool for studying wildlife and are widely used by conservationists around the world. However, these motion-activated cameras can generate immense amounts of data, of which a large proportion is simply empty images triggered by the wind and other disturbances. Traditionally, these thousands of images/videos were manually sorted by the researchers themselves in an error-prone and extremely time-consuming process – presenting a substantial obstacle to research. As such, we wanted to develop a way to make the process both dramatically faster and more accurate – allowing conservationists to instead focus on their research and make the world a better place.

How Does it Work?
In essence, TrapTagger is a website where you create an account, log in, and upload your images and videos. These are then run through various AI algorithms that automatically find the animals and determine their species, in turn allowing you to download a CSV spreadsheet indicating how many individuals of a particular species were detected at a specific location at some date and time.
More specifically, we use three separate AI algorithms to process your data. The first detects and locates the animals, vehicles and humans in each image/video – in essence also filtering out all the empty ones. The next model then classifies the species of each detected animal, before the final one calculates the similarity between the coat patterns of animals of the same species for the purpose of individual identification.
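The three-stage flow described above can be sketched roughly as follows. This is a minimal illustration only – the function names, data layout and confidence threshold are all hypothetical, not TrapTagger's actual API:

```python
# Illustrative sketch of the three-stage pipeline: detect, classify, and
# defer uncertain classifications to human annotators. All names and the
# 0.8 threshold are hypothetical, not TrapTagger's implementation.

def detect_animals(image):
    """Stage 1 stand-in: return this image's detections; empty list = empty image."""
    return image.get("detections", [])

def classify_species(detection):
    """Stage 2 stand-in: return a (species, confidence) guess for one detection."""
    return detection.get("species", "unknown"), detection.get("species_conf", 0.0)

def process(images, conf_threshold=0.8):
    """Run detection, drop empty images, and auto-label only confident results."""
    results = []
    for img in images:
        detections = detect_animals(img)
        if not detections:  # empty image: filtered out entirely
            continue
        for det in detections:
            species, conf = classify_species(det)
            needs_human = conf < conf_threshold  # uncertain -> manual annotation
            results.append({"file": img["file"], "species": species,
                            "auto": not needs_human})
    return results
```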
We believe it’s a waste of time for the AI to incorrectly label your data – so we only allow the AI to automatically label images/videos where we are quite confident that it will be correct. In cases where we detect uncertainty, we instead leave those for manual human annotation. For these cases (typically on the order of a few percent of the dataset), as well as for verification purposes, we provide a number of highly-optimised manual-annotation workflows that can be worked on by a large workforce in parallel.
Animal Detection
The first stage of the processing pipeline is the detection of animals and, by implication, the removal of empty files. For this we use the industry-standard MegaDetector due to its unrivalled performance at this task. Moreover, this step also generates bounding boxes that indicate the location of the animals in each image/frame, which in turn can be used to count the animals in each image and estimate overall group size. Thanks to a conservative confidence threshold, the odds of missing an animal are extremely low, especially since animals typically appear over a sequence of images/frames – giving the AI multiple attempts to detect them.
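Applying a conservative confidence threshold to MegaDetector's standard batch-output JSON is enough to separate empty files from animal-containing ones. A minimal sketch, assuming MegaDetector's documented output layout (category "1" is "animal"; the 0.2 threshold is purely illustrative, not the one TrapTagger uses):

```python
# Split a MegaDetector-style results dict into animal-containing and empty
# image lists. Category "1" denotes "animal" in MegaDetector's output; the
# deliberately low 0.2 threshold is illustrative of a conservative setting.

def split_by_detections(md_results, animal_threshold=0.2):
    animal_files, empty_files = [], []
    for image in md_results["images"]:
        has_animal = any(
            det["category"] == "1" and det["conf"] >= animal_threshold
            for det in image.get("detections", [])
        )
        (animal_files if has_animal else empty_files).append(image["file"])
    return animal_files, empty_files
```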
That being said, MegaDetector can sometimes – especially with our conservative thresholds – mistake man-made objects, odd-looking vegetation and rocks for animals. These false detections can skew animal counts, upset species classification, and cause annotators to waste time looking at empty images. Thankfully, these mistakes tend to be static, consistently occurring in the same location across a large percentage of the footage from a particular camera, and can thus be identified. However, things are never quite that straightforward, and one must compromise between precision and recall when identifying them. As such, we have again gone with the conservative option of using a lower threshold, and instead have users verify the suggested false detections in a quick and easy-to-use workflow as part of the processing steps, ensuring maximum precision in their data.
For the cases where these false detection sources move around a lot – as is the case with foliage – we have also added the ability to draw polygonal masks that will hide all animal detections fully contained within, saving you from spending hours looking at the grass waving in the breeze.
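The containment rule above – a detection is hidden only when its bounding box lies entirely inside a user-drawn polygon – can be sketched with a standard ray-casting point-in-polygon test. This is an illustration of the rule, not TrapTagger's implementation; coordinates are assumed normalised to [0, 1]:

```python
# A detection is masked only if all four corners of its bounding box fall
# inside the user-drawn polygon. Illustrative sketch using ray casting.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon given as (x, y) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at height y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def bbox_masked(bbox, polygon):
    """bbox = (x_min, y_min, width, height); masked only if fully contained."""
    x, y, w, h = bbox
    corners = [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
    return all(point_in_polygon(cx, cy, polygon) for cx, cy in corners)
```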
Most importantly, the MegaDetector model has been trained on a very wide variety of international camera-trap data and thus performs well on all animal species, across all biomes. This means that at the very least, TrapTagger can save you a lot of time by reliably removing the vast majority of your empty files no matter where your data is from.
Species Classification
The next stage of the process is the classification of the species contained in each of the images. Here TrapTagger leverages our own in-house species classifiers in conjunction with custom heuristic algorithms to only label the species where we are quite confident of the classification, instead leaving the challenging examples to the rigour of the human eye.
Ideally, one would like an AI model capable of accurately identifying every single animal species on the planet, 100% of the time. However, such a classifier is simply not realistic at this point in time, and real-world models of that breadth do not achieve sufficient precision for stringent applications such as academic studies. Instead, we focus on creating regional species classifiers, training them on our users’ data and thus slowly expanding our reach little by little.
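One plausible form of the confidence gating described above is to auto-label a detection only when the top class score is both high and well clear of the runner-up, deferring everything else to a human. The thresholds and scheme here are illustrative assumptions – TrapTagger's actual heuristics are not spelled out in this document:

```python
# Hypothetical confidence-gating heuristic: accept the top species only if it
# scores highly AND beats the runner-up by a clear margin; otherwise defer.
# The 0.9 / 0.5 values are illustrative, not TrapTagger's actual settings.

def gate_classification(class_scores, min_conf=0.9, min_margin=0.5):
    """class_scores: dict mapping species name -> score in [0, 1].
    Returns the species to auto-label, or None to send to manual annotation."""
    ranked = sorted(class_scores.items(), key=lambda kv: kv[1], reverse=True)
    top_species, top_score = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0.0
    if top_score >= min_conf and (top_score - runner_up) >= min_margin:
        return top_species  # confident: label automatically
    return None             # uncertain: leave for the human eye
```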
Our currently-available regional classifiers:
South America
North America
Europe
South-East Asia
Southern Africa
Sub-Saharan Africa
Australia
For users whose data does not fall under one of our regional classifiers, we offer a general global classifier that is able to identify a variety of common domestic animals, and then classify the remaining species into a number of useful categories such as felines, canines, birds, leporids, rodents, etc. – allowing you to simply focus on getting your species of interest annotated as rapidly as possible.

Individual Identification
The final stage in the process is the identification of specific individuals. For this we use HotSpotter – the industry-standard open-source coat-pattern-recognition algorithm. Essentially, this algorithm is based on a technique traditionally used for human fingerprints: it identifies matching key points between two images and generates a score indicating how similar they are. To enhance its performance, we combine it with other AI tools like automatic background masking and flank detection – where the former forces the algorithm to focus on the animal only, and the latter ensures that there are no false matches between opposite sides of animals.
However, AI is not yet at the point of being able to reliably perform this task on its own – as any false match would instantly result in individuals being merged, in turn snowballing until your database is a mess. Therefore, the above AI-generated information is instead combined with spatial and temporal information to provide ordered suggestions – from most similar to least – to your human annotators in an efficient and parallelised workflow, making this process as quick and painless as possible.
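The ordering of suggestions described above – pattern similarity combined with spatial and temporal plausibility – could be sketched as follows. The scoring scheme, field names and speed cut-off are all illustrative assumptions, not TrapTagger's actual method:

```python
# Hypothetical ranking of individual-ID suggestions: sort candidates by coat-
# pattern similarity, but heavily down-weight matches that would require the
# animal to travel implausibly fast between the two sightings.

from datetime import datetime

def rank_suggestions(query, candidates, max_speed_km_h=10.0):
    """query: {'time': datetime}. Each candidate: {'individual', 'similarity',
    'time': datetime, 'dist_km': distance from the query sighting}.
    Returns candidates ordered from most to least plausible match."""
    def score(cand):
        hours = abs((cand["time"] - query["time"]).total_seconds()) / 3600.0
        plausible = cand["dist_km"] <= max_speed_km_h * max(hours, 0.01)
        return cand["similarity"] * (1.0 if plausible else 0.1)
    return sorted(candidates, key=score, reverse=True)
```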
As much as it may seem to be, AI isn’t magic. As such, the rule of thumb is: if you are unable to reliably ID a particular species yourself, then you cannot expect the AI to help. Moreover, the larger the uniquely-identifiable area on the animal’s body, the better. Should HotSpotter not work well on your particular species, you can switch it off and still use our streamlined workflow to allow yourself and a team of researchers to work quickly and comfortably in parallel.
Human in the Loop
One cannot fully rely on AI to annotate everything, so TrapTagger was built from the ground up with this in mind. As such, there are many different ways you can annotate your data through TrapTagger’s fast, efficient, and parallelised interfaces. Each is entirely optional, and can be performed on a species-by-species basis, allowing you to only annotate the information you need for your species of interest, and no more.
- Static Detection Check – verify and filter out the sources of static detections like rocks and branches.
- Timestamp Correction – correct the timestamps on footage where extraction failed (due to corruption or metadata stripping).
- Species – label the species contained in each animal-containing trigger event where the AI is uncertain.
- AI Check – examine lower-confidence AI suggestions to find species that were missed, correct manual annotation mistakes and otherwise hone your results.
- Informational Tagging – add custom informational tags like age, sex and behaviour to your species of interest.
- Sighting (Box) Correction – correct the AI-generated bounding boxes to get 100% accurate counts and locations.
- Individual Identification – build up a library of known individuals using AI-generated suggestions.

A Myriad of Output Formats
Explore – view your labelled images in the platform filtered by species, location, timestamp etc.
CSV – export the entire database of information in a customisable csv file with information such as coordinates, timestamps, species, bounding boxes etc.
COCO File – export your data in the COCO format for easy import into other systems.
Image/Video Download – download your images with species & informational labels in their metadata, optionally sorted into species folders etc.
Sorted Local Files – or instead point the platform to your local copy of the data and have it sort, rename, and metadata label them without having to download a thing.
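As an example of the COCO export mentioned above, annotated detections can be packaged into the standard COCO object-detection layout (`images`, `annotations`, `categories`). The field names follow the public COCO format; the record structure and example data here are invented for illustration:

```python
# Minimal sketch of packaging detections into COCO object-detection format.
# The input record layout is hypothetical; the output keys follow the COCO spec.
import json

def to_coco(records, species_list):
    """records: [{'file', 'width', 'height', 'bbox': [x, y, w, h] in pixels,
    'species'}] -> COCO-format dict."""
    categories = [{"id": i + 1, "name": s} for i, s in enumerate(species_list)]
    cat_id = {s: i + 1 for i, s in enumerate(species_list)}
    images, annotations = [], []
    for img_id, rec in enumerate(records, start=1):
        images.append({"id": img_id, "file_name": rec["file"],
                       "width": rec["width"], "height": rec["height"]})
        annotations.append({"id": img_id, "image_id": img_id,
                            "category_id": cat_id[rec["species"]],
                            "bbox": rec["bbox"],
                            "area": rec["bbox"][2] * rec["bbox"][3],
                            "iscrowd": 0})
    return {"images": images, "annotations": annotations,
            "categories": categories}
```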

Analysis Tools
Beyond the wide variety of output formats, TrapTagger also offers a number of built-in analysis tools – including common statistical analyses based on popular R packages – meaning that for most applications you won’t even need to leave the platform.
- Maps & Heatmaps
- Graphing
- Diversity Indices
- Occupancy
- Activity Patterns
- Spatial Capture-Recapture
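As a worked example of one of the simpler analyses listed above, the Shannon diversity index is computed from per-species detection counts as H′ = −Σ pᵢ ln(pᵢ), where pᵢ is the proportion of detections belonging to species i:

```python
# Shannon diversity index H' = -sum(p_i * ln(p_i)) over species proportions,
# computed from a dict of per-species detection counts.
import math

def shannon_index(counts):
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total)
                for c in counts.values() if c > 0)
```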
Why the Cloud?
There are obvious downsides to using a web application to annotate camera-trap data, the primary one being the need to upload many thousands of images and videos to the cloud – especially when you are out in the field with limited connectivity. However, AI algorithms are very computationally intensive, and in most cases the data would take longer to process on the average laptop than to simply upload to the cloud, where entire fleets of servers are available. Moreover, with the increased proliferation of low-Earth-orbit satellite-internet services, connectivity issues are slowly becoming a thing of the past.
Overall, we have found that the advantages of cloud computing – backed-up data, worldwide data accessibility, widely-distributed networks of human annotators, rapid and regular software updates, etc. – far outweigh the drawbacks. However, we have not lost sight of who our users are, and have endeavoured to optimise the platform for use in the field by making it as data-efficient as possible.
If you are unable to upload your data from your institution, please contact us about sending a hard drive of data that we can upload for you.
Data Ownership & Privacy
- All data uploaded to the platform remains under the ownership of the original owner(s). It is not transferred in any way.
- The data will never be shared with anybody – neither directly with third parties nor publicly.
Data Security
We fully understand the sensitivity of your data and endeavour to ensure that it remains as secure as possible in a number of ways:
- All data is safely stored in professional cloud platforms offered by Amazon Web Services.
- All data is only accessible behind login walls.
- You can control which members of your organisation have what level of access to your data.
- All data is safely transferred back and forth to our servers using encryption.
Video Tutorials
Learning to use TrapTagger is made easy with a full set of step-by-step video tutorials!