
Segmentation to Classification

You may want to evaluate the accuracy of a segmentation model with precision/recall metrics at the image level. The solution presented here converts the segmentation output into a classification label by taking the class with the most pixels. Possible variations include:

  • Convert to OK/NG only.
  • Use a defect-severity order to rank the labels instead of pixel count.
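For the second variation, here is a minimal sketch of ranking by severity instead of pixel count. The `DEFECT_PRIORITY` ordering, the class indices, and the example classes are assumptions for illustration, not part of the SDK:

```python
import numpy as np

# Hypothetical severity ranking: earlier entries are more severe defect classes.
# Class 0 is assumed to be the OK/background class.
DEFECT_PRIORITY = [3, 1, 2]  # e.g. crack > scratch > stain

def label_by_defect_order(mask_labels: np.ndarray) -> int:
    """Return the most severe defect class present, or 0 (OK) if none."""
    present = set(np.unique(mask_labels).tolist())
    for cls in DEFECT_PRIORITY:
        if cls in present:
            return cls
    return 0

# A 2x2 mask containing classes 0, 1, and 2: class 1 outranks class 2.
print(label_by_defect_order(np.array([[0, 1], [2, 2]])))  # → 1
```

With this variation, a single pixel of a severe defect class outweighs a large region of a minor one, which is often closer to how inspection lines grade parts.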

Folder Structure

```
.
├── custom
│   ├── __init__.py
│   ├── segmentation_to_classification.py
├── train.yaml
└── transforms.yaml
```

SegmentationToClassification Class

Content of `custom/segmentation_to_classification.py`:

```python
from landinglens.model_iteration.sdk import BaseTransform, DataItem
import numpy as np


class SegmentationToClassification(BaseTransform):
    """Transforms a segmentation output into a classification output.

    If there are NG pixels, the output will be the NG class with the most pixels;
    otherwise, it will be OK.
    """

    def __init__(self, **params):
        """Any parameters defined in the yaml file are passed to init. Store the
        values in self so you can access them in the __call__ method.
        """

    def __call__(self, inputs: DataItem) -> DataItem:
        """Return a new DataItem with transformed attributes. DataItem has the
        following attributes:

        image - input image.
        mask_scores, mask_labels - segmentation mask probabilities and classes.

        Returns
        -------
            A named tuple class DataItem with the modified attributes.
        """
        # Get the scores and raise an error if they are not defined
        mask = inputs.mask_scores
        if mask is None:
            raise TypeError("'mask_scores' not defined in inputs")

        # Per-pixel predicted class
        labels = np.argmax(mask, -1)

        # Count the pixels of each label and sort in descending order
        values, counts = np.unique(labels, return_counts=True)
        arg_sort = np.argsort(counts)[::-1]
        values_sorted = values[arg_sort]
        counts_sorted = counts[arg_sort]

        # Take the NG (non-zero) class with the most pixels; if only OK (0)
        # pixels are present, fall back to OK.
        ng_positions = np.nonzero(values_sorted != 0)[0]
        position = ng_positions[0] if len(ng_positions) > 0 else 0
        label = int(values_sorted[position])
        score = float(counts_sorted[position] / np.sum(counts_sorted))

        return DataItem(
            image=inputs.image,
            label=label,
            score=score,
            mask_scores=inputs.mask_scores,
            mask_labels=inputs.mask_labels,
        )
```
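To sanity-check the pixel-counting logic without the LandingLens SDK, the core of `__call__` can be exercised on a synthetic score map. The function name, shapes, and class count below are arbitrary choices for this sketch:

```python
import numpy as np

def dominant_ng_label(mask_scores: np.ndarray) -> tuple:
    """Return (label, proportion) of the largest NG class, or OK (0) if none."""
    labels = np.argmax(mask_scores, -1)
    values, counts = np.unique(labels, return_counts=True)
    order = np.argsort(counts)[::-1]
    values_sorted, counts_sorted = values[order], counts[order]
    ng = np.nonzero(values_sorted != 0)[0]
    position = ng[0] if len(ng) > 0 else 0
    return int(values_sorted[position]), float(counts_sorted[position] / counts.sum())

# 4x4 score map with 3 classes: 12 OK pixels, 3 of class 1, 1 of class 2.
scores = np.zeros((4, 4, 3))
scores[..., 0] = 1.0    # everything OK by default
scores[0, :3, 1] = 2.0  # three pixels of NG class 1
scores[1, 0, 2] = 2.0   # one pixel of NG class 2
label, score = dominant_ng_label(scores)
print(label, score)  # → 1 0.1875
```

Class 1 wins because it covers the most NG pixels, and the score (3/16) is its share of the image, matching the behavior described in the class docstring.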

Use SegmentationToClassification in train.yaml

```yaml
dataset:
  train_split_key: train
  val_split_key: dev
  test_split_key: dev
train:
  batch_size: 8
  epochs: 1000
  learning_rate: 0.0001
  previous_checkpoint:
  validation_run_freq: 15
model:
  avi:
    Unet:
      backbone_name: resnet34
      input_shape: [1024, 704, 3]
      output_depth: 24
      activation: softmax
      encoder_weights: imagenet
      decoder_block_type: transpose
loss:
  CategoricalCrossEntropy:
    weights: 5
    from_logits: False
eval:
  postprocessing:
    output_type: classification
    transforms:
      - CustomTransform:
          transform: custom.segmentation_to_classification.SegmentationToClassification
metrics:
  - MeanIOU:
      num_classes: 24 # TODO: Must match output_depth
      from_logits: False
      ignore_zero: True
      name: mean_iou
monitor_metric:
  val_mean_iou: max
```
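Since `num_classes` must match the model's `output_depth` (see the TODO in the config), a small consistency check can catch mismatches before a long training run. This sketch assumes PyYAML is available and parses the config from an inline string; in practice you would read `train.yaml` from disk:

```python
import yaml

config_text = """
model:
  avi:
    Unet:
      output_depth: 24
metrics:
  - MeanIOU:
      num_classes: 24
"""

config = yaml.safe_load(config_text)
output_depth = config["model"]["avi"]["Unet"]["output_depth"]
num_classes = config["metrics"][0]["MeanIOU"]["num_classes"]
assert num_classes == output_depth, (
    f"num_classes ({num_classes}) must match output_depth ({output_depth})"
)
print("config OK")
```

Running a check like this as a pre-flight step is cheaper than discovering the mismatch from a shape error mid-training.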