
· 3 min read

This release highlights the updates below#

  • Faster model training lets you iterate on models quickly (normally under 10 minutes). Available for Object Detection and Segmentation projects.
  • Auto Tuning automatically identifies training parameters, making model training easy and optimizing model accuracy.
  • Simpler labeling, with direct labeling in the Data Browser.

Please read below for more details on this release.

Training a Model#

Iterate on model training and data re-labeling quickly (currently available for Object Detection and Segmentation projects). You can train a model directly from the Data Browser. Some important changes to note:

  • Training Speed: training jobs now execute much faster in the Data Browser. Your initial training run will require some warm-up time; however, subsequent runs will complete in under 10 minutes (oftentimes below 5 minutes).

  • Training Progress can now be seen by clicking the box in the top right bar. Once the model has started training, it lists an estimated time to completion.

  • Splits are no longer Metadata; we have upgraded them to their own object type to allow for greater functionality. We will automatically split 80/20 dev/test. You can manage this per media item or change the splits for the entire dataset by clicking the three dots to the right of “Media Split”.
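For intuition, an automatic 80/20 dev/test split amounts to shuffling the media and partitioning it. A minimal sketch of that idea (illustrative only, not the platform's implementation):

```python
import random

def split_80_20(media_ids, seed=42):
    """Shuffle media IDs and partition them 80/20 into dev/test."""
    ids = list(media_ids)
    random.Random(seed).shuffle(ids)
    cut = int(len(ids) * 0.8)
    return {"dev": ids[:cut], "test": ids[cut:]}

splits = split_80_20(range(100))
print(len(splits["dev"]), len(splits["test"]))  # 80 20
```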

Direct Labeling in Data Browser#

You can now label images directly in the Data Browser: simply select an image from the Data Browser, select the “Label” option at the top left, select a defect type, and start labeling.

OK/NG Image Level Status#

Image-level status allows you to see whether an image has labels or not. As you label, you’ll see a red “NG” or green “OK” appear on each image.

Model Performance & Error Analysis#

Once your model has trained, you will see the media-level performance score at the top right of your browser. It’s important to note that this model is temporary and will be replaced by subsequent models trained in the Data Browser. If the performance is acceptable, you can save the model to the Models section of the platform.

  • Error Analysis takes place directly in the browser. Simply select “Show Predictions” or use the hotkey ‘s’ to view the predicted labels in the grid view. If you click into an image’s focused view, you will see an overlay of both the GT labels (solid box) and the predictions (dotted box).

  • Prediction Performance Scores are visible in the focused media viewer at the bottom left. These scores are measured using the IoU (intersection over union) of the ground truth and the prediction; see the sketch after this list. You can also sort by score using the “View Options” option in the grid overview.

  • Details / Confidence Threshold can be accessed using the “Details” button to the right of the model score. From here you can access a variety of detailed confusion matrix views, as well as see the impact on performance as you adjust the model’s confidence threshold.

  • Train Custom Models using the down arrow next to the Train button. You can adjust transforms, augmentation, and hyperparameters as you see fit.
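For reference, IoU scores a prediction against its ground truth as the ratio of the boxes' overlap to their combined area. A minimal sketch for axis-aligned boxes in (x1, y1, x2, y2) format (illustrative only, not the platform's code):

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.14: small overlap, low score
```

A score of 1.0 means the prediction matches the ground truth exactly; 0.0 means no overlap at all.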

· 2 min read

Upload Segmentation labeled media#


Users can now upload segmentation-labeled images to LandingLens in the Pascal VOC format.

On the segmentation upload screen, switch from "Upload unlabeled media" to "Upload labeled media". This will reveal three types of objects to upload:

  • Images you wish to upload
  • Segmentation masks that contain the labels for each image
  • JSON defect map that matches the images to their corresponding labeled class

Drag in or select the relevant items from your folders and choose "upload media". Once complete, your labeled images will be available for review and model training.
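As an illustration, a defect map is a small JSON file relating the values in your segmentation masks to class names. Assuming integer mask values, it might look like the following (check the documentation for the exact schema LandingLens expects):

```json
{
  "0": "ok",
  "1": "scratch",
  "2": "dent"
}
```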

Edit labels while reviewing#

Task managers can now edit labels directly in a review task without creating a whole new task. While reviewing a task, simply select "Edit" at the bottom right corner, and you can adjust the existing labels or create new ones from scratch. After saving, any changes made will be recorded for your records.


Re-assign rejected media#

Now you can re-assign rejected media to the original labeler, with added notes to help them understand how they can improve their label quality. This not only helps improve ground truth quality but also gives labelers feedback to improve their labeling abilities.


After reviewing a task, you'll see an option to re-assign the rejected media (with their comments) to the same labelers.

· 3 min read

Direct Labeling Control#

With the recent addition of direct media labeling, it's now easier than ever to rapidly label images. However, with this improvement comes an increased likelihood of careless labeling and poor ground truth labels.

To help Admins account for this, we've added the ability to limit user access to direct labeling on a per-project basis. With this access control, Admins can limit users to task-based labeling only. This is especially useful for new employees or third-party labeling teams who have yet to prove their labels will be consistent with the Defect Book.


When adding new users to a private project (Settings > Invite), you'll see a series of checked access controls under "Project Permissions". To revoke direct labeling access, simply uncheck "Direct label"; that user will no longer see the option to label images from the data browser. You can still assign labeling tasks to these users.


· 2 min read

Data Browser Updates#

To make managing your media even easier, we've implemented two fundamental changes to how we label and split images:

Direct Image Labeling#


When viewing a single image, you will see a new UI element at the top left. By default you will be in "View" mode, but if you switch to "Label", you can add GT labels directly to each image.

This method will speed up your labeling, but be warned: without a reviewer or agreement-based labeling, there is an increased chance of mislabeled images going unnoticed.

Splits#


We've moved Split out of the Metadata section and made it an independent element. You can now adjust the split directly in the image view. To auto-split from the data browser, simply select the three vertical dots next to "Split", then choose "Auto Split".

Classification Activation Heatmap#

To help with classification error analysis, we've implemented a heatmap "visual explanation" for all classification models. Now when conducting error analysis, users can see the specific area of each image that caused it to be classified in a specific way.


Here we can see the heatmap highlights the cracked portion of the cement square - this was the area of the image that caused the model to classify this example as "cracked".

You can learn more about the approach here.
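Visual explanations like this are commonly implemented with class activation mapping techniques such as Grad-CAM, which weight a network's final convolutional feature maps by the gradients of the predicted class score. A rough PyTorch sketch of that general idea, assuming a CNN classifier and one of its convolutional layers (not necessarily LandingLens's implementation):

```python
import torch
import torch.nn.functional as F

def gradcam(model, conv_layer, image, class_idx):
    """Rough Grad-CAM: weight conv activations by their pooled gradients."""
    acts, grads = [], []
    h1 = conv_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    score = model(image.unsqueeze(0))[0, class_idx]  # score for the target class
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)  # pool gradients per channel
    cam = F.relu((weights * acts[0]).sum(dim=1))       # weighted sum of feature maps
    return cam / (cam.max() + 1e-8)                    # normalized heatmap
```

Upsampled to the input resolution, the resulting heatmap highlights the regions that most influenced the chosen class, as in the cracked-cement example above.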

Training Hour Usage#

You can now see your organization's per-project GPU training usage in hours. To see how many hours you've spent training models during a month, choose Projects > View Projects. You'll see the GPU hours for the current and previous month listed next to each project.

Pause Cloud Instances#

From the Devices page, you can now pause cloud instances used for deployment. Simply navigate to Devices and choose the three vertical dots under Actions on the right of the page.

· One min read

Delete Projects#


Users can now delete projects they own by navigating to the Settings page using the "Settings" button at the bottom left of the platform. Here you will see two options, Access and Administrative. Navigate to Administrative, select "Delete Project", and finally confirm your understanding that links to the project will no longer work and its data will be inaccessible.

One thing to note: this delete function simply removes the project from your organization's view. If you want to recover a project or permanently delete it, please reach out to the LandingLens staff at support@landing.ai or directly to your support team.

New Upload Experience#

We're pleased to introduce an entirely new upload flow. We now show previews, allow easy metadata attachment, and added a collapse function for upload jobs so you can continue with other work on the platform while you wait.


· 4 min read

Anomaly Detection#


We're excited to share that LandingLens now supports Anomaly Detection projects! In the event you don't have enough defective images, you can now create an Anomaly Detection model that is trained entirely on normal, or OK, images. Anomaly Detection allows users to get to model building and deployment quickly. This is especially useful when you're launching a new product and are unsure what sort of defect types to expect.

Data#

Our upload feature enables you to upload and classify pre-organized images, meaning there is no need to go through long labeling tasks. Unlike other models in LandingLens, Anomaly Detection is an Unsupervised Model (it isn't trained explicitly on labeled media). While this saves a lot of time, it means the accuracy of your folder organization is absolutely paramount to training a good model.

In an anomaly detection project, there are two classes:

  • Normal media are used to train the model. Object location and lighting should be as consistent as possible to ensure good anomaly detection.

  • Abnormal media are used to test your model once trained. The anomaly should be clearly visible to ensure detection.

  • To start, organize your images into folders based on whether they are "normal" representations of your item or "abnormal" defective examples of your item.

  • Next, navigate to the upload page and drag & drop each folder to the corresponding section.


  • After uploading, you will see all the images in the data browser tagged with the corresponding class of the upload type.


  • Finally, split the data as you would normally and export. As mentioned earlier, Anomaly Detection models are only trained on "normal" data; as such, when you auto-split the data, you'll notice that you are unable to add "abnormal" data to the training set.


Model#

Launching anomaly models is easy: LandingLens automatically sets the model type to the latest anomaly model. Simply select your exported dataset, adjust any hyperparameters you want, and launch the training job.

Once your model run completes, you'll see your new anomaly detection model in the overview page.

Error Analysis#

Anomaly detection models are much more sensitive than traditional models in LandingLens, so you should expect a higher overkill (false positive) rate. With this in mind, Anomaly Detection is a great approach when you're not sure what types of defects to expect; the sketch below illustrates the trade-off.
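To make the sensitivity trade-off concrete: an anomaly model assigns each image a score, and the decision threshold trades recall of true anomalies against overkill on normal parts. A small illustration with hypothetical scores (not LandingLens output):

```python
# Hypothetical anomaly scores: higher means "more anomalous".
normal_scores = [0.10, 0.15, 0.22, 0.31, 0.45]
abnormal_scores = [0.40, 0.62, 0.80, 0.91]

def rates(threshold):
    overkill = sum(s >= threshold for s in normal_scores) / len(normal_scores)
    recall = sum(s >= threshold for s in abnormal_scores) / len(abnormal_scores)
    return overkill, recall

for t in (0.3, 0.4, 0.5):
    overkill, recall = rates(t)
    print(f"threshold={t}: overkill={overkill:.0%}, recall={recall:.0%}")
# threshold=0.3: overkill=40%, recall=100%
# threshold=0.4: overkill=20%, recall=100%
# threshold=0.5: overkill=0%, recall=75%
```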


Deployment & Model Drift Alerting#

We're pleased to announce that LandingLens Deployment now supports Alerts. Reliably monitor each device's Defect Rate and Model Confidence.

To create alerts, navigate to the Deployments module and choose "Manage" from the Alerts card.


Next click "create" to pull up the Alert creation pop-up. Give it a name, select the target device and choose one of the alert options. Today we support two types of alerts

  • Average Defect Rate is the number of defective parts over the total number of parts inspected. Generally it is a good sign if this number is low.
  • Average Confidence measures how confident the model is in all predictions made over the time range. Normally this number should be high.

Next choose a threshold at which you want the alert to be triggered.

  • For Average Defect Rate, the alert is triggered whenever the average value breaches the set threshold.

  • Average Confidence is triggered whenever the average value falls below the set threshold.

Next, choose a time window over which you want your "average" to be calculated; in other words, alerts are monitored based on a rolling average. For example, if you were to monitor the average defect rate over a week, the alert would only be triggered if the rolling average from the previous week breached your set threshold (see the sketch after these steps).

Finally, click the + and select the recipients of the alert, which will be delivered in the form of an email.
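A minimal sketch of the rolling-average trigger logic described above, assuming per-part inspection results arrive as a stream (illustrative only, not the platform's implementation):

```python
from collections import deque

class DefectRateAlert:
    """Fire when the rolling average defect rate breaches a threshold."""

    def __init__(self, threshold: float, window_size: int):
        self.threshold = threshold
        self.window = deque(maxlen=window_size)  # keeps only recent results

    def record(self, is_defective: bool) -> bool:
        self.window.append(is_defective)
        rate = sum(self.window) / len(self.window)  # rolling average defect rate
        return rate > self.threshold                # True means "send the email"

alert = DefectRateAlert(threshold=0.2, window_size=100)
for part_is_defective in [False] * 90 + [True] * 30:
    if alert.record(part_is_defective):
        print("alert triggered")
        break
```

An Average Confidence alert would be symmetric, firing when the rolling average falls below the threshold instead.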

Human in the Loop (HiLo)#

We're excited to share that we've completed our initial release of Human in the Loop. Now customers can combine insight from both their models and their teammates to ensure complete recall of defective items. Human in the Loop augments model performance by sending defective samples to human reviewers. It can also send a random sample of predictions to be double-checked by a human. This is ideal for customers just getting started with augmenting their processes with models: HiLo helps your team build confidence in the system by manually checking LandingLens results.

More instructions on this feature to come...

· 4 min read

Classification#

We're excited to share that LandingLens now supports Classification projects! Overall, this project type is similar to Object Detection and Segmentation; however, there are some key differences that present unique advantages for users looking to train models quickly.


Data#

Classification allows users to quickly get to model building and deployment. Our Classified Folder Upload feature enables you to upload pre-classified images, meaning there is no need to go through long labeling tasks.

  • Simply organize your images into folders by class; the folder name will be used to create a defect in your defect book on upload (see the example layout after this list).


  • From your data browser, select Upload Images at the top right; you'll be presented with two options.

    • Unclassified media uploads will be in raw status; you can classify them using the platform labeling tool
    • Classified media uploads will be classified based on your folder structure
  • From either upload page, you can either drag your folders/media into the upload area or click the area to select folders from your finder window


  • When uploading Classified folders of media, after adding folders you will see a summary of the images to be uploaded as well as the Defect names to be added. As you can see in this example, all of the class types are new and will create new classes in the defect book. Scroll through to review your media & click upload.

  • Note, you can now collapse the upload window at the top right and do other work on LandingLens while your images upload in the background.
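As an illustration, a classified upload might be organized like this, with one folder per class (hypothetical class names):

```text
dataset/
├── ok/
│   ├── img_001.jpg
│   └── img_002.jpg
├── scratch/
│   ├── img_003.jpg
│   └── img_004.jpg
└── dent/
    └── img_005.jpg
```

Uploading this layout would create (or reuse) the classes "ok", "scratch", and "dent" in the defect book.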


After uploading, your images will appear in your defect book. If you uploaded unclassified media, you can create labeling tasks as usual. Instead of labeling localized regions of images, task assignees will simply be asked to classify each image into one of the class buckets.


After labeling or uploading classified media, you will be able to see each image tagged with a class. Once you've classified enough media, you can split and export the data to the Model module.

Model#

Launching classification models is easy: LandingLens automatically sets the model type to the latest classification model. Simply select your exported dataset, adjust any hyperparameters you want, and launch the training job.

Once your model run completes, you'll see your new classification model in the overview page. For Classification, we use AUC, or Area Under the Curve.

AUC#

The Area Under the Curve (AUC) is a measure of a classifier's ability to distinguish between classes.

The higher the AUC, the better the model is at distinguishing between the positive and negative classes. When AUC = 1, the classifier can perfectly separate all of the positive class points from all of the negative class points. If, however, the AUC were 0, the classifier would be predicting all negatives as positives and all positives as negatives.

When 0.5 < AUC < 1, there is a high chance that the classifier will be able to distinguish the positive class values from the negative class values, because it detects more true positives and true negatives than false positives and false negatives.

When AUC = 0.5, the classifier is not able to distinguish between positive and negative class points at all; it is effectively predicting either a random class or a constant class for every data point.

In short, the higher the AUC value for a classifier, the better its ability to distinguish between the positive and negative classes.
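As a quick worked example, AUC can be computed directly from ground truth labels and model scores; a sketch with hypothetical values using scikit-learn:

```python
from sklearn.metrics import roc_auc_score

# Hypothetical ground truth (1 = positive class) and model scores.
y_true = [0, 0, 0, 1, 1, 1]
y_score = [0.1, 0.3, 0.6, 0.4, 0.8, 0.9]

print(roc_auc_score(y_true, y_score))  # ~0.89: good, but not perfect, separation
```

Here one negative example (score 0.6) outranks one positive example (score 0.4), which is exactly what pulls the AUC below 1.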

Error Analysis#


Error analysis is exactly the same as for other project types, except for the overview metrics and the image analysis section. In the analysis section, instead of seeing localized labels per image, you will see the specific class at the bottom right of each image.

· One min read

Developer Tools#

Dear Developers,

We've added a new section, Developer Tools, to our support center. It covers articles documenting every command you can use in LandingLens' command-line interface (CLI).

The LandingLens CLI is a developer tool to help you manage your datasets and models directly from your terminal. The LandingLens CLI is simple to install, works on macOS, Windows, & Linux, and offers a range of functionality to make your developer experience with LandingLens better.

With the LandingLens CLI, you can:

  • Upload new images
  • Set metadata for existing images
  • Programmatically launch training and evaluation jobs
  • Fine-tune model hyperparameters
  • Apply custom transformations in augmentation or post-processing

Follow the instructions here to install and set up your LandingLens CLI.

Need a guide on the parameter values for the train and transform configs? Please check the train YAML reference and the transform YAML reference.

If you want to see quick start examples for custom transformations, follow our guide to get started.

· 2 min read

Support Center!#

You made it! Welcome to the new LandingLens support center, your all-in-one tool to help you get the most out of LandingLens. You can always access it from the "Support Center" button at the top right of the platform. We've divided this center into three sections. Release (i.e., here) is where you can find details on our latest releases, what was included, and how it works. Documentation is a detailed, indexed overview of all LandingLens features; anytime you have a question about where to find something or how it works, this section will help.

Project Level Type#

We'll soon be adding new model types to LandingLens, including Classification and Anomaly Detection. In an effort to help with the organization of these model and label types, you'll now be prompted to pick a model type when creating a new project. Legacy projects will not be impacted by this change.


Class Level Metrics#

Now you can see per-class defect performance metrics in your error analysis reports! This will help you identify how your model is performing on each defect class, so you and your team can focus your efforts on improving specific class performance.


Model Versioning#

In order to help you track which version of a model is in deployment, we've added a feature that automatically tags models with the date they were added to deployment. Now when you are viewing models on your edge device, you will see not only the model name but also the date on which it was added to Deployment from the Models page.

· 4 min read

Segmentation Labeling Tool#

In response to customer feedback, we completely overhauled our segmentation labeling experience. In addition to new functionality, you will notice an entirely new look and feel to the product UI. We’ll be bringing object detection up to speed soon, but for now we expect the new segmentation labeling tool to greatly improve labeling speed and accuracy.

  • Semantic Segmentation: previously, you could attribute one pixel to multiple defects. In an effort to align with our modeling approach, namely semantic segmentation, you can now assign each pixel to only one defect. In practice this means that if you’ve labeled an area with Defect A, you can relabel it to Defect B simply by labeling it as B (see the sketch after this list).
  • Hotkeys: one area our research highlighted as essential to speed was hotkeys. As such, we’ve equipped the entire flow (besides drawing labels) with hotkeys, meaning you will not need to click around tools. To pull up the hotkey menu, simply press the “/” key.
  • Polygon Labeling: when facing a labeling task that requires more precision than you can manage with hand-drawn labels, you can now leverage two tools that help you ensure maximum label accuracy. Polygon allows you to draw a shape by clicking points around it. To access it, simply click the polygon icon at the top and start clicking points; you can also hold down the mouse to draw non-straight lines if needed. To complete the shape, simply click on the first point you started with, and you’ll see your segmentation mask render.
  • Polyline Labeling: similarly, Polyline is useful when precision is needed along a line. Instead of drawing shapes, this tool allows you to draw interconnected line labels of varying width. Simply select the line icon, choose a desired width, and click point to point to draw the linear shape. Once complete, simply hit the “enter” key and you’ll see your mask render to the left of the canvas.
  • Defect Book Integration: you can now easily pull up your defect book notes from within the labeling tool. Simply select the “i” icon next to the defect name and you’ll see the details for that defect in the defect book appear to the left.
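The one-class-per-pixel rule means a segmentation label can be stored as a single-channel mask of class IDs, so relabeling a region simply overwrites it. A minimal numpy illustration (not the tool's internals):

```python
import numpy as np

# Single-channel mask: each pixel holds exactly one class ID.
# 0 = background, 1 = Defect A, 2 = Defect B.
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 2:6] = 1   # label a region as Defect A
mask[4:6, 4:6] = 2   # relabel part of it as Defect B: A is overwritten there

print((mask == 1).sum(), (mask == 2).sum())  # 12 4: no pixel holds two defects
```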

Cloud Inference#

In Deployment, you can now provision cloud instance devices directly from the LandingLens platform! This feature was largely developed for demo purposes and is password-protected for now, but if you think you need it, please reach out to your Landing AI contact.

Pascal VOC Export#

You can now export both object detection and segmentation images with their annotations in Pascal VOC format. Simply select your labeled images, select export, and check the box to generate Pascal VOC files. NOTE: This tool is not generally available; if you require this functionality, please reach out to your Landing AI contact.

This will generate the Pascal VOC file, which you will be able to download by clicking on the three dots next to the export job on the Exported Dataset tab. Note, Pascal VOC files will only be available if you checked the box on the export launch page.

Pascal VOC Import (Bounding Box only)#

You can now also import images with their annotations in Pascal VOC format. Import currently supports bounding box (object detection) annotations only.
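For reference, a Pascal VOC annotation is an XML file accompanying each image; a minimal bounding-box example in the generic Pascal VOC schema (class name and coordinates are hypothetical):

```xml
<annotation>
  <filename>img_001.jpg</filename>
  <size><width>640</width><height>480</height><depth>3</depth></size>
  <object>
    <name>scratch</name>
    <bndbox>
      <xmin>120</xmin><ymin>80</ymin><xmax>200</xmax><ymax>150</ymax>
    </bndbox>
  </object>
</annotation>
```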