TrapTagger Interface
Highly Optimised
Whether you are annotating the last few percent of your dataset that the AI was unsure of, or manually annotating the whole survey for extreme accuracy, you want the process to be as quick and easy as possible. That is why we have built a highly optimised annotation interface that allows multiple simultaneous workers to each process up to 2 500 images per hour, which in turn allows you to run surveys more regularly, and spend more time on your research.
Image Clusters
The entire process is built around the concept of image clusters – where images taken within a short time frame, at the same site, across multiple cameras, can be coalesced into a single sighting of an animal group. This in turn allows you to annotate the entire cluster by looking at a single image, regardless of the size of the herd, or how long it spent in front of the camera.
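The grouping idea can be sketched as follows. This is an illustrative example only, not TrapTagger's actual implementation: the 60-second gap threshold and the data layout are assumptions.

```python
from datetime import datetime, timedelta

def cluster_images(images, max_gap=timedelta(seconds=60)):
    """Group one site's images into clusters of consecutive captures.

    images: list of (timestamp, camera_id) tuples from a single site.
    A new cluster starts whenever the gap to the previous image
    exceeds max_gap (threshold chosen for illustration).
    """
    clusters = []
    for ts, cam in sorted(images):
        if clusters and ts - clusters[-1][-1][0] <= max_gap:
            clusters[-1].append((ts, cam))  # same sighting, possibly another camera
        else:
            clusters.append([(ts, cam)])   # gap too large: new sighting
    return clusters

site = [
    (datetime(2024, 5, 1, 6, 0, 0), "cam-A"),
    (datetime(2024, 5, 1, 6, 0, 20), "cam-B"),   # 20 s later, second camera
    (datetime(2024, 5, 1, 9, 30, 0), "cam-A"),   # hours later: separate sighting
]
print([len(c) for c in cluster_images(site)])  # → [2, 1]
```

Because the two dawn images fall within the gap threshold, they form one cluster and a label applied to either image covers the whole sighting.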
Hierarchical Structure
Define your labels in a hierarchical structure, only tag the categories that are of use to you, and come back to them at any time should your needs change. This also lets you reduce the number of species you need to deal with at any time, and allows you to bring in an expert to annotate the more challenging species categories.
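A label hierarchy might look like the sketch below. The categories and species are purely illustrative: the point is that you can tag at the parent level first and refine (or hand a subtree to an expert) later.

```python
# Hypothetical two-level label hierarchy: parent category -> finer species.
labels = {
    "antelope": ["impala", "kudu", "nyala"],
    "carnivore": ["leopard", "hyaena"],
    "other": [],
}

def flatten(hierarchy):
    """All labels an annotator might apply: parent categories plus children."""
    return list(hierarchy) + [
        child for children in hierarchy.values() for child in children
    ]

print(flatten(labels))
```

An annotator uncertain between impala and kudu can simply tag "antelope" and move on; the finer distinction can be made later without revisiting the rest of the survey.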
Custom Hotkeys
Annotate an entire cluster of images with a single press of an easy-to-remember button of your choosing.
Knocked-Down Cameras
Sometimes cameras get knocked down, and generate a lot of empty images in a short period of time. Sometimes, they are then picked up by passers-by. Instead of wasting hours looking through empty images, you can simply mark a camera as knocked down and have all of its images removed from the annotation process. It can then easily be checked at a later stage using a binary search interface to rapidly determine if it was ever picked up, and if so, when.
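The binary-search step works because the images are in time order: showing the annotator the middle image and asking "is the camera still down here?" halves the remaining range each time, so the pickup point is found in O(log n) views instead of a full scan. The sketch below is illustrative; the function name and the yes/no callback stand in for the interactive interface.

```python
def find_pickup_index(images, is_knocked_down):
    """Return the index of the first image taken after the camera was
    picked up, or len(images) if it never was.

    images: the camera's images in chronological order.
    is_knocked_down(img): stands in for the annotator's yes/no answer
    when shown that image.
    """
    lo, hi = 0, len(images)
    while lo < hi:
        mid = (lo + hi) // 2
        if is_knocked_down(images[mid]):
            lo = mid + 1   # still down at mid: pickup is later
        else:
            hi = mid       # already upright at mid: pickup is at or before mid
    return lo

# Toy example: images 0-6 show the fallen camera, 7-9 show it upright again.
images = list(range(10))
print(find_pickup_index(images, lambda i: i < 7))  # → 7
```

With 10,000 images this needs only around 14 yes/no answers from the annotator.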
AI Assistance
Regardless of whether you choose to make use of auto-classification, there is built-in AI assistance to make the manual annotation process easier.
- Empty images are removed entirely.
- Images within clusters are ordered from best to worst based on the number of species they contain, and the size and number of their animal detections, which means you usually only need to look at the first image of a cluster.
- If an additional species is detected in an image you have not yet viewed, you are automatically prompted to look at more images in the cluster to ensure that nothing is missed.
- Clusters are ordered according to the species they potentially contain, allowing you to annotate faster.
- Once you have annotated your images, you can check your species labels against the AI, allowing you to find mistakes and hone your results.
- Bounding boxes are generated automatically, so if the count of the animals in each image is of use to you, you only need to correct the occasional box as an additional step.
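The best-to-worst ordering described above can be sketched with a simple scoring key. The exact weighting TrapTagger uses is not specified here; sorting lexicographically by species count, then detection count, then total detection area is an assumption for illustration.

```python
def image_score(image):
    """Illustrative ranking key for an image within a cluster.

    image: dict with 'species' (a set of detected species names) and
    'detections' (a list of bounding-box areas as fractions of the frame).
    Higher tuples sort as 'better' (more informative) images.
    """
    return (
        len(image["species"]),        # more species first
        len(image["detections"]),     # then more detections
        sum(image["detections"]),     # then larger detections overall
    )

cluster = [
    {"species": {"zebra"}, "detections": [0.05]},
    {"species": {"zebra", "impala"}, "detections": [0.12, 0.04]},
    {"species": {"zebra"}, "detections": [0.20, 0.10]},
]
best_first = sorted(cluster, key=image_score, reverse=True)
print(len(best_first[0]["species"]))  # → 2
```

Under this scheme the two-species image is shown first, so annotating it alone usually covers the whole cluster.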