Computer vision tasks such as object detection and semantic segmentation have made great progress in the last decade. However, real-world deployment inevitably brings new environments, new objects, and, more generally, new data distributions. It is therefore important to build machine learning models that can cope with these out-of-distribution (OOD) scenarios.
As a first step, we aim to learn a general sense of objectness so that the model can detect the presence of novel objects at run time (e.g., wild animals crossing streets). However, training data often cover only a limited number of object categories, or leave many objects in the background unannotated. As a result, the model often misclassifies or even overlooks novel objects. We are investigating novelty-aware training methods, e.g., mitigating the suppression of unannotated objects in the background, and exploiting extra sources of knowledge that are easy to access.
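To illustrate the idea of mitigating background suppression, here is a minimal sketch (not the actual method): standard detector training treats every proposal unmatched to an annotation as background, which suppresses unannotated objects. The sketch below instead keeps the background loss only for unmatched proposals with low objectness; the threshold and function names are hypothetical.

```python
import numpy as np

def background_loss_mask(objectness, matched, thresh=0.7):
    """Return a mask selecting proposals that should receive the
    background (suppression) loss.

    Standard training applies this loss to all unmatched proposals.
    Here, unmatched proposals with high objectness are exempted,
    since they may be unannotated objects rather than true background.
    `thresh` is a hypothetical hyperparameter for illustration.
    """
    objectness = np.asarray(objectness, dtype=float)
    matched = np.asarray(matched, dtype=bool)
    # Keep the background loss only for unmatched, low-objectness proposals.
    return (~matched) & (objectness < thresh)

# Example: proposal 2 is unmatched but looks object-like (0.8 > 0.7),
# so it is exempted from the background loss.
mask = background_loss_mask(
    objectness=[0.9, 0.2, 0.8, 0.1],
    matched=[True, False, False, False],
)
# mask -> [False, True, False, True]
```

In a real detector the mask would gate the per-proposal classification loss; the objectness estimate itself could come from a class-agnostic head trained on the annotated categories.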