Annotation

A raw image means nothing to AI software. With an unsupervised approach, an AI can eventually learn to group objects by shared features, but those groupings will mean nothing to a human. At some point a human needs to tell the AI what a crop looks like, so that when the AI finds one again it can tell the human that it has found a crop.

So how do we annotate an image to tell the AI what an object is? There are several methods, and each is a trade-off between accuracy and time or cost.

‘Bounding boxes’ are rectangles drawn around an object in software. With the right tools they can be drawn very quickly, but there are problems. What if the object is partly in shadow, the contrast is poor, leaves overlap, or the leaves of the target weed sit behind another weed of a different species? A human can imagine the 3D scene and allow for all of this; an AI can't unless we train it very well. Bounding boxes are still used because in parts of the world where labour is cheap, this kind of annotation is cheap and quick to do.
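To make the idea concrete, here is a minimal sketch of what a bounding-box annotation looks like as data, loosely following the common [x, y, width, height] pixel convention. The image filename, class names, and coordinates are hypothetical examples, not output from any particular labelling tool.

```python
# Hypothetical bounding-box annotations for one field image.
# bbox format: [x, y, width, height] in pixel coordinates.
annotation = {
    "image": "field_plot_0042.jpg",  # made-up filename
    "objects": [
        {"class": "crop", "bbox": [120, 88, 64, 90]},
        {"class": "weed", "bbox": [300, 150, 40, 55]},
    ],
}

def bbox_area(bbox):
    """Area of an [x, y, w, h] box in pixels."""
    _, _, w, h = bbox
    return w * h

for obj in annotation["objects"]:
    print(obj["class"], bbox_area(obj["bbox"]))
```

Note how little the box actually says: every pixel inside it, leaf and soil alike, is lumped under one label, which is exactly why overlap and occlusion cause trouble.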

A better development is to draw a polygon around the object, which allows to some degree for overlapping leaves. We can take that a step further and fill in the polygon, declaring that every pixel within it belongs to a certain class. But edge effects creep in: drawing the polygon just inside or just outside a leaf margin can lose shape information that is crucial to AI feature extraction and classification.
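The step from a polygon outline to a per-pixel class label is just rasterisation. A minimal sketch, using Pillow's ImageDraw to fill a polygon into a class mask; the class IDs and the polygon vertices are made-up examples:

```python
from PIL import Image, ImageDraw
import numpy as np

WIDTH, HEIGHT = 640, 480
CLASS_IDS = {"background": 0, "crop": 1, "weed": 2}  # hypothetical classes

# Vertices (x, y) traced around a single leaf, in pixel coordinates.
leaf_polygon = [(100, 120), (180, 90), (240, 160), (170, 220), (110, 190)]

# Start with an all-background mask, then fill the polygon with its class ID.
mask = Image.new("L", (WIDTH, HEIGHT), CLASS_IDS["background"])
ImageDraw.Draw(mask).polygon(leaf_polygon, fill=CLASS_IDS["crop"])

mask_array = np.asarray(mask)
print("crop pixels:", int((mask_array == CLASS_IDS["crop"]).sum()))
```

Every pixel the fill touches is now labelled "crop", but the label is only as accurate as the human-drawn vertices, which is where the edge effects come from.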

But there is only one method that can deliver 100% accurate annotation: synthetic imagery. After more than two years of work, we are ready to release our products to the world.

Because we create the objects within the image ourselves, we automatically know, with 100% accuracy, which class every pixel in the image belongs to. Humans have never achieved this, certainly not in complex agricultural images. Automatic, 100% accurate annotation. The time and cost saved is phenomenal and can run into millions of pounds.
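A minimal sketch of why the labels come for free, assuming a simple compositing pipeline (placeholder arrays stand in for a rendered plant and a photographic background; the real production process is not described here). When we paste the object into the scene ourselves, its alpha channel already tells us exactly which pixels it covers, so the class mask is written at the same moment as the image:

```python
import numpy as np

H, W = 480, 640
background = np.zeros((H, W, 3), dtype=np.uint8)      # stand-in field photo
plant_rgba = np.zeros((120, 120, 4), dtype=np.uint8)  # stand-in rendered plant
plant_rgba[30:90, 30:90] = [40, 160, 60, 255]         # opaque green patch

def composite(bg, obj_rgba, top, left, class_id, mask):
    """Alpha-blend obj_rgba onto bg; record its opaque pixels in the mask."""
    h, w = obj_rgba.shape[:2]
    alpha = obj_rgba[..., 3:4] / 255.0
    region = bg[top:top + h, left:left + w]
    region[:] = (alpha * obj_rgba[..., :3] + (1 - alpha) * region).astype(np.uint8)
    # The annotation is a by-product of the paste: no human tracing needed.
    mask[top:top + h, left:left + w][obj_rgba[..., 3] > 0] = class_id

mask = np.zeros((H, W), dtype=np.uint8)  # 0 = background
composite(background, plant_rgba, top=200, left=260, class_id=1, mask=mask)
print("labelled crop pixels:", int((mask == 1).sum()))
```

Nothing here is estimated or hand-drawn: the mask is exact by construction, which is the whole point of synthetic annotation.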