Machine learning for non-machine learning experts

All you need to know to get the custom detector working
AI models can be good students, but they are not human. They lack your intuition and they see things differently, so you need to teach them to see the world through your eyes. To train an AI model to detect objects in an image, you need to tell the algorithm WHERE it should learn relevant information and show it examples of WHAT it should (and shouldn't) learn to find. The first step is to understand how you "see" objects. Think about it: how do you define what the object you are looking for looks like? How do you identify a single unit of this type of object? What visual features are you looking for? The shape? The color? The size? The texture? A specific part of the object? A combination of all of them under certain circumstances? Once you have identified the key visual features that define your object of interest, you can teach the AI model to find it. For demonstration purposes, we will walk through a challenging sheep detection project using the custom detector tool on the Picterra platform.

WHERE

Where should you define training areas?
Analyze your image and find spots where you have examples of your object of interest and spots where you don't. These spots are called "training areas". The algorithm will look at them in order to learn. Select some of them to tell the algorithm WHERE it should look, both for what you are interested in and for what you are not. Keep in mind that the AI model won't learn from the sections of your image you didn't highlight.

Where you have examples: highlight them
These are sections of your image that you highlight to tell the algorithm, "look at this region, here are examples of what I need you to find". Each training area should contain multiple examples of your object of interest. It is important to draw a series of training areas that show your objects of interest in different contexts: sections of your image where they appear on different backgrounds, in different distribution patterns, or in different lighting conditions.
Where you don’t have examples: define counterexample areas
Defining areas where you know there are no examples of your object of interest helps the algorithm understand what the things you are NOT looking for look like. The AI model will use these sections of your image as counterexamples. It is particularly helpful to draw the algorithm's attention to areas containing objects that look similar to your object of interest but are not what you are actually looking for. It usually also helps to include spots that are pure background.
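To make the WHERE step concrete, here is a minimal sketch of how you can think of training areas as data: each area is just a polygon in image coordinates, tagged as an example or a counterexample. The coordinates, the "kind" labels, and the structure below are assumptions made for this illustration, not Picterra's actual format.

```python
# Illustrative only: training areas as polygons in image coordinates.
# The "kind" labels and coordinates are made up for this sketch.
training_areas = [
    {   # sheep on a grassy background
        "kind": "example",
        "polygon": [(120, 80), (480, 80), (480, 360), (120, 360)],
    },
    {   # sheep packed tightly together on bare soil
        "kind": "example",
        "polygon": [(600, 400), (900, 400), (900, 650), (600, 650)],
    },
    {   # rocks that look like sheep, but are not
        "kind": "counterexample",
        "polygon": [(50, 700), (300, 700), (300, 900), (50, 900)],
    },
    {   # pure background: an empty stretch of pasture
        "kind": "counterexample",
        "polygon": [(950, 100), (1200, 100), (1200, 300), (950, 300)],
    },
]

# Everything outside these polygons is simply ignored during training.
examples = [a for a in training_areas if a["kind"] == "example"]
counterexamples = [a for a in training_areas if a["kind"] == "counterexample"]
print(f"{len(examples)} example areas, {len(counterexamples)} counterexample areas")
```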

WHAT
You have already told the algorithm WHERE it should keep its "eyes" focused. Now it is time to tell it WHAT it should look for. Start by identifying the visual features that define the object you are interested in; to do so, think about what helps you recognize such an object yourself. The next step is outlining, in other words annotating, these objects. This is how you communicate to the algorithm WHAT you need it to learn to find.

How should you draw your annotations?
Learning how to draw your annotations is an intuitive and experimental process. How do you define a "unit" of this type of object? What is the key visual factor you "see"? Is it the full object? Or is it a specific and distinctive part of it? In this case, we started with full-body outlines of each sheep.
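For a sense of what those outlines amount to, here is a hedged sketch: each full-body outline is a polygon, and polygons can be rasterized into a mask of "sheep" pixels that the model learns to reproduce. This is a generic illustration with made-up coordinates, not Picterra's internal representation.

```python
# Illustrative only: rasterizing full-body outlines into a mask of sheep pixels.
import numpy as np
from PIL import Image, ImageDraw

IMAGE_SIZE = (1024, 1024)  # (width, height) of the training image, assumed here

# Full-body outlines of two sheep, as (x, y) vertex lists (made-up coordinates)
annotations = [
    [(210, 150), (245, 140), (270, 160), (260, 195), (220, 200)],
    [(300, 180), (335, 172), (355, 198), (340, 228), (305, 225)],
]

# Rasterize the outlines: 1 = sheep pixel, 0 = everything else
mask = Image.new("L", IMAGE_SIZE, 0)
draw = ImageDraw.Draw(mask)
for outline in annotations:
    draw.polygon(outline, fill=1)
mask = np.array(mask)

print("annotated sheep pixels:", int(mask.sum()))
```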


Do you want to go further than detecting the sheep? Do you want to count them?
In this image, the sheep are standing very close to each other, which makes counting them individually a very challenging task. We know that the way you annotate an object influences the output, so we explored a few variations in how the annotations were drawn to see how each one affected the results. Let's compare the outputs. For reference, the known sheep count in this image is 433.

Outlining the full body of the sheep:






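Whichever annotation style you choose, the count itself comes from the detector's output geometries. Below is a minimal sketch, assuming the detections were exported as a GeoJSON FeatureCollection; the file name and export format are assumptions made here, not a documented Picterra workflow.

```python
# Illustrative only: counting detections exported as GeoJSON and comparing
# against the known count. "sheep_detections.geojson" is an assumed file name.
import json

KNOWN_COUNT = 433  # ground-truth number of sheep in this image

with open("sheep_detections.geojson") as f:
    detected = len(json.load(f)["features"])

error_pct = 100 * (detected - KNOWN_COUNT) / KNOWN_COUNT
print(f"detected {detected} sheep ({error_pct:+.1f}% vs. the known count of {KNOWN_COUNT})")
```

Because tightly packed sheep can merge into a single detected shape, comparing this number against the known count of 433 is a quick way to judge which annotation style separates individual animals best.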
How can you get it wrong?
Keep in mind that training and customizing an AI detection model is an intuitive and iterative process: you will need to explore and test what works best for each type of object you want to detect. However, there are certain mistakes you can avoid:
Not defining training areas. Even if you have annotated objects, without training areas the model will see nothing at all.
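A small sketch of why this fails: only annotations that fall inside a training area contribute anything to training, so with an empty list of training areas the usable set is empty no matter how many sheep you outlined. The coordinates and helper function below are made up for illustration.

```python
# Illustrative only: with no training areas, no annotation is usable.
def inside(point, box):
    """Return True if an (x, y) point lies inside an axis-aligned box."""
    (x, y), (x0, y0, x1, y1) = point, box
    return x0 <= x <= x1 and y0 <= y <= y1

training_areas = []                                         # the mistake: none defined
annotation_centers = [(230, 170), (320, 200), (410, 185)]   # annotated sheep

usable = [p for p in annotation_centers
          if any(inside(p, area) for area in training_areas)]
print("annotations the model can actually learn from:", len(usable))  # -> 0
```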








Build your own AI detector
Discover which type of annotations works best for the kind of object you need to track and for the context it appears in. You might be trying to detect a type of object with a totally different shape, pattern, and color, and your objects might be distributed throughout your image or grouped in a different way.