# Release Highlights
- Faster model training enables quick model iteration (typically under 10 minutes). Available for Object Detection and Segmentation projects.
- Auto Tuning identifies training parameters automatically, making model training easy and optimizing model accuracy.
- Simpler labeling steps, with direct labeling in the Data Browser.

Please read on below for the details of this release.
# Training a Model
Iterate on model training and data re-labeling quickly (currently available for Object Detection and Segmentation projects). You can train a model directly from the Data Browser. Some important changes to note:
**Training Speed**: Training jobs execute much faster in the Data Browser. Your initial training run requires some warm-up time, but subsequent runs complete in under 10 minutes (often under 5 minutes).
**Training Progress** can now be viewed by clicking the box in the top right bar. Once the model has started training, an estimated time to completion is listed.
**Splits** are no longer Metadata; we have upgraded them to their own object type to allow for greater functionality. Media are automatically split 80/20 into dev/test. You can manage this per media item, or change the splits for the entire dataset by clicking the three dots to the right of “Media Split”.
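The default 80/20 behavior amounts to a shuffled partition of the dataset. A minimal sketch in Python, assuming a simple list of media IDs (the function and parameter names here are illustrative, not part of the platform's API):

```python
import random

def split_dataset(media_ids, dev_fraction=0.8, seed=42):
    """Shuffle media IDs and partition them into dev and test sets.

    Hypothetical sketch of an 80/20 dev/test split; `media_ids`,
    `dev_fraction`, and `seed` are illustrative names only.
    """
    ids = list(media_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    cut = int(len(ids) * dev_fraction)
    return ids[:cut], ids[cut:]

dev, test = split_dataset(range(100))
print(len(dev), len(test))  # 80 20
```

Shuffling before cutting matters: without it, any ordering in the dataset (e.g. capture time) would leak into the dev/test boundary.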
# Direct Labeling in the Data Browser
You can now label images directly in the Data Browser: select an image, choose the “Label” option at the top left, select a defect type, and start labeling.
# OK/NG Image-Level Status
Image-level status lets you see at a glance whether an image has labels. As you label, a red “NG” or green “OK” appears on each image.
# Model Performance & Error Analysis
Once your model has trained, the media-level performance score appears at the top right of your browser. Note that this model is temporary and will be replaced by subsequent models trained in the Data Browser. If the performance is acceptable, you can save the model to the Models section of the platform.
**Error Analysis** takes place directly in the browser. Select “Show Predictions” or use the hotkey ‘s’ to view the predicted labels in the grid view. If you open an image’s focused view, you will see both ground-truth labels (solid box) and predictions (dotted box) overlaid.
**Prediction Performance Scores** are visible in the focused media viewer at the bottom left. These scores are measured using the IoU (Intersection over Union) of the ground truth and the prediction. You can also sort by score using the “View Options” menu in the grid view.
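IoU is the standard overlap metric for comparing two boxes: intersection area divided by union area, ranging from 0 (disjoint) to 1 (identical). A minimal sketch for axis-aligned boxes, assuming an `(x1, y1, x2, y2)` corner format (the platform's internal box representation may differ):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes.

    Boxes are (x1, y1, x2, y2) corner tuples — an assumed format
    for illustration, not the platform's actual schema.
    """
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # zero if boxes don't overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```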
**Details / Confidence Threshold** can be accessed using the “Details” button to the right of the model score. From there you can view a variety of detailed confusion matrix views and see the impact on performance as you adjust the model’s confidence threshold.
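Conceptually, adjusting the confidence threshold just filters which predictions count as detections, trading recall for precision. A toy sketch, assuming predictions as simple `(label, confidence)` pairs (a simplified stand-in for the platform's prediction objects):

```python
def apply_threshold(predictions, threshold):
    """Keep only predictions whose confidence meets the threshold.

    `predictions` is a list of (label, confidence) pairs — an
    illustrative format, not the platform's actual data model.
    """
    return [p for p in predictions if p[1] >= threshold]

preds = [("scratch", 0.91), ("dent", 0.55), ("scratch", 0.30)]
# A higher threshold keeps fewer, more confident detections.
print(len(apply_threshold(preds, 0.5)))   # 2
print(len(apply_threshold(preds, 0.95)))  # 0
```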
**Train Custom Models** using the down arrow next to the Train button. You can adjust transforms, augmentations, and hyperparameters as you see fit.