About TrapTagger


Camera traps are a powerful tool for studying wildlife and are widely used by ecologists around the world. However, these motion-activated cameras can generate immense amounts of data, a large proportion of which is simply empty images triggered by wind and other disturbances. Traditionally, these thousands of images were manually sorted by the ecologists themselves in an error-prone and extremely time-consuming process – one that often took longer than the camera-trap survey itself. This presented a massive obstacle to camera-trap-focused research. We therefore wanted to develop a way to make the process drastically shorter and more accurate. In turn, this would allow ecologists to run camera-trap surveys more often, resulting in more complete data and more statistically robust results across ever-greater areas. Moreover, it would free ecologists to spend their new-found time on analysing the data itself, and on making the world a better place.

It was out of this need that TrapTagger was born – an AI-driven tool for processing camera-trap surveys.

TrapTagger allows for a hybrid approach, combining automatic AI classifications with manual annotations in an efficient, user-friendly interface. It has been developed in close collaboration with the University of Oxford’s Wildlife Conservation Research Unit (WildCRU), with third-party philanthropic funding. Together, we have gone back into their archives to annotate, manage, and organise more than 30 surveys conducted over the past decade, totalling in excess of 1 million images from a number of wildlife reserves throughout Southern Africa. With this dataset, we were able to train a state-of-the-art species classifier that can accurately identify 55 different Southern African species, making the journey much easier for any ecologists who wish to follow in their footsteps.

Ready to get started with TrapTagger? Let’s chat.