Project Type
LandingLens offers multiple ways to inspect your images: Object Detection, Segmentation, Classification, and Anomaly Detection. Projects let you manage each of these functions for a set of images. When you create a new project, choose what you will be doing with your images.
Each method has advantages and disadvantages, but the primary factor to consider is your unique inspection case.
While this article provides a high-level understanding of when to use these methods, we encourage you to leverage our staff's experience to determine the best approach for you.
# Classification

Classification assigns a concept, or "class," to an entire image. The model learns to recognize similar images and predict their class. Use this project type when your defect is not localized. For example, looking at the following two images, we can tell which is a bolt and which is a washer from the image as a whole, not from any localized area of pixels. Classification works the same way, considering the entire image instead of a specific area.
| Washer Class | Bolts Class |
| --- | --- |
| ![]() | ![]() |
Classification models consider the entire image when predicting the class
If you face an inspection task that is non-localized, like the examples given above, give classification a try! Another benefit of classification is that you don't have to label images; instead, classes are assigned based on the name of the parent upload folder.
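To illustrate the folder-based labeling described above, here is a minimal sketch of deriving class labels from parent folder names. The file paths and layout are hypothetical examples, not LandingLens internals:

```python
from pathlib import Path

# Hypothetical upload layout: each image sits in a folder named after its class.
uploads = [
    Path("uploads/washer/img_001.jpg"),
    Path("uploads/washer/img_002.jpg"),
    Path("uploads/bolt/img_003.jpg"),
]

# The class label for each image is simply its parent directory's name,
# so no per-image labeling work is needed.
labels = {p.name: p.parent.name for p in uploads}
```

This is why classification is the fastest project type to set up: organizing images into folders is the only "labeling" required.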
# Object Detection
Bounding boxes are used to label objects of interest in images. These labeled images teach the model what to look for; once trained, the model generates similar boxes to show where it thinks objects are.

Object Detection models are trained to detect specific objects within an image. This is the most popular project type because of its strong performance and straightforward labeling process. While related to classification, it is more specific: it applies classification to distinct areas or objects in an image. As you can see in the example, it uses bounding boxes to tell us where each "class" is in the image.
It's important to note that while classification assigns only one class per image, object detection models can detect multiple types of items in a single image, as we see above.
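The point above can be illustrated with a sketch of what object detection output typically looks like. The structure and values are hypothetical, not LandingLens's actual response format:

```python
# Hypothetical predictions: each detection pairs a class with a bounding box
# (x_min, y_min, x_max, y_max) and a confidence score.
predictions = [
    {"class": "bolt",   "box": (10, 20, 60, 90),    "score": 0.94},
    {"class": "washer", "box": (120, 40, 180, 100), "score": 0.88},
    {"class": "bolt",   "box": (200, 15, 250, 85),  "score": 0.91},
]

# Unlike classification, which returns one class for the whole image,
# a single image can yield detections of several different classes.
classes_found = {p["class"] for p in predictions}
```

Each box localizes one object, so the same image can contain any number of detections across any number of classes.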
# Segmentation
Segmentation, like object detection, learns to find localized defects within an image. Unlike object detection, segmentation is not limited to bounding boxes: you can specify the exact pixels where the defect occurs. For users this means two things: 1) labeling takes much longer, since you must be much more precise while "painting" the defect, and 2) inference can be much more precise, since you are not feeding "non-defective" pixels into the model's understanding of what is bad.
Let's unpack that last point. When labeling with bounding boxes, you are telling your model that everything within the box is "bad," and in certain cases this means including many "OK" pixels in your definition of "bad." Consider the following examples, where the user labels a crack with object detection and with segmentation.
| Object Detection Label | Segmentation Label |
| --- | --- |
| ![]() | ![]() |
In this example we're trying to detect a crack in the cement tiles. Which label type do you think is more precise at teaching a model to detect the cracks in cement?
The bounding box example on the left teaches the model "everything within this box is a defect". As you can imagine this means the model is likely to be confused, since the box includes many uncracked areas of cement. This is where segmentation comes to the rescue - because we're not limited to rectangular labels, users can build highly accurate datasets that tell the model exactly which pixels are defective and which are not.
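The precision difference described above can be made concrete with a toy example. The tiny mask below is invented for illustration only:

```python
# Toy 6x6 image: 1 marks a cracked pixel, 0 marks intact cement.
mask = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0],
    [0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0],
]

# A bounding box drawn tightly around this diagonal crack still spans
# a 4x4 region, so it labels 16 pixels as "defective".
box_area = 4 * 4
defect_pixels = sum(sum(row) for row in mask)

# Only a quarter of the boxed pixels are actually part of the crack;
# a segmentation mask labels exactly those 4 pixels instead.
defect_fraction = defect_pixels / box_area
```

For thin, diagonal defects like cracks, the box's "OK" pixel fraction grows with defect length, which is exactly when segmentation pays off.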
Segmentation projects are very powerful but can also be time-consuming, because labeling takes much longer. Generally, we suggest you start with object detection and, if you are still not hitting your performance goals, try segmentation.
# Anomaly Detection

If you don't have enough defective images, you can create an Anomaly Detection model that is trained entirely on normal ("OK") images. Anomaly Detection lets you get to model building and deployment quickly. This is especially useful when you're launching a new product and are unsure what defect types to expect.
Anomaly Detection models are "unsupervised", meaning there is no labeling of classes. Instead, you upload "normal" images and train the model. You can also upload some "abnormal" images to test the effectiveness of the model once trained.
One thing to note: Anomaly Detection models are much more sensitive than the other model types in LandingLens, so you should expect a higher overkill (false positive) rate. With this in mind, Anomaly Detection is a great approach when you're not sure what types of defects to expect.
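The sensitivity trade-off above can be sketched as simple score thresholding. The scores and threshold values are hypothetical, not how LandingLens scores images internally:

```python
# Hypothetical anomaly scores from a model trained only on "normal" images;
# higher means "less like the normal training set".
scores = {"img_a": 0.12, "img_b": 0.35, "img_c": 0.81}

def flag_anomalies(scores, threshold):
    """Flag every image whose anomaly score meets the threshold."""
    return {name for name, s in scores.items() if s >= threshold}

# A high threshold is conservative; a low threshold is more sensitive
# and raises the overkill (false positive) rate.
strict = flag_anomalies(scores, 0.8)
loose = flag_anomalies(scores, 0.3)
```

Because there are no labeled defect classes, everything hinges on where this decision boundary sits, which is why anomaly models tend toward overkill when tuned to catch subtle defects.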