Fool Me Once: Robust Selective Segmentation via Out-of-Distribution Detection with Contrastive Learning

Overview:

Neural networks often fail when presented with data that differs from the data on which they were trained, so networks should be trained to detect when this occurs, especially in safety-critical settings such as robotics. Our work is motivated by the observation that large-scale image datasets contain many images that are distinct from any given specialist image dataset. In the setting of semantic segmentation, our method leverages such a large-scale dataset to train a segmentation network, via contrastive learning, to explicitly represent out-of-distribution images as distinct from the specialist training dataset.
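
To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of this kind of contrastive out-of-distribution training. It is not the paper's implementation: the toy encoder, the hinge-style loss, and the names SegEncoder, contrastive_ood_loss, and margin are all illustrative assumptions. Per-pixel embeddings of specialist (in-distribution) images are pulled toward a learned prototype, while embeddings of images sampled from a large-scale dataset, treated as out-of-distribution, are pushed at least a margin away.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SegEncoder(nn.Module):
    """Toy fully-convolutional encoder producing a per-pixel embedding map."""
    def __init__(self, embed_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, embed_dim, 3, padding=1),
        )

    def forward(self, x):
        # L2-normalise the channel dimension so distances live on the unit hypersphere.
        return F.normalize(self.net(x), dim=1)

def contrastive_ood_loss(emb, prototype, is_ood, margin=0.5):
    """Hinge-style contrastive loss over per-pixel embeddings.

    Pixels of in-distribution images are pulled toward the prototype;
    pixels of out-of-distribution images are pushed at least `margin` away.
    """
    proto = F.normalize(prototype, dim=0).view(1, -1, 1, 1)
    dist2 = ((emb - proto) ** 2).sum(dim=1)             # (B, H, W)
    pull = dist2                                        # attract in-distribution pixels
    push = F.relu(margin - (dist2 + 1e-8).sqrt()) ** 2  # repel out-of-distribution pixels
    per_pixel = torch.where(is_ood.view(-1, 1, 1), push, pull)
    return per_pixel.mean()

if __name__ == "__main__":
    encoder = SegEncoder()
    prototype = nn.Parameter(torch.randn(32))
    opt = torch.optim.Adam(list(encoder.parameters()) + [prototype], lr=1e-3)

    # Stand-ins for a batch mixing the specialist dataset (in-distribution)
    # with images sampled from a large-scale dataset treated as OOD.
    images = torch.randn(4, 3, 64, 64)
    is_ood = torch.tensor([False, False, True, True])

    opt.zero_grad()
    loss = contrastive_ood_loss(encoder(images), prototype, is_ood)
    loss.backward()
    opt.step()
    print(f"contrastive OOD loss: {loss.item():.4f}")

At test time, the per-pixel distance to the prototype could then serve as an out-of-distribution score, allowing the network to abstain from segmenting regions it has flagged as unfamiliar.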

Link to arXiv.

BibTeX:

@inproceedings{williams2021foolmeonce,
  author    = {Williams, David and Gadd, Matthew and De Martini, Daniele and Newman, Paul},
  title     = {Fool Me Once: Robust Selective Segmentation via Out-of-Distribution Detection with Contrastive Learning},
  booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)},
  year      = {2021},
}

Paper: