I work on out-of-distribution (OOD) detection in deep learning. When a machine learning model receives an input that is categorically different from everything it saw during training, it cannot make any reasonable inference about that input. In many cases, however, it still outputs a prediction with very high confidence, which is undesirable. Using methods from adversarial machine learning and generative modeling, I am trying to make a model's behaviour more sensible in domains that lie outside its expertise.
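As a minimal illustration of the overconfidence problem, the sketch below scores inputs with the maximum softmax probability (MSP), a standard OOD-detection baseline: a peaked softmax suggests the model "recognizes" the input, while a flat one suggests it does not. The function names, threshold, and example logits here are hypothetical, chosen only for illustration.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def msp_score(logits):
    # Maximum softmax probability: higher suggests in-distribution.
    return softmax(logits).max()

def is_ood(logits, threshold=0.9):
    # Flag the input as out-of-distribution if the model's top-class
    # confidence falls below the (hypothetical) threshold.
    return msp_score(logits) < threshold

# One sharply peaked logit vector and one nearly flat one.
confident = np.array([9.0, 0.5, 0.2])
flat = np.array([1.0, 0.9, 1.1])
```

In practice the MSP score is known to be unreliable precisely because models can be confidently wrong on OOD inputs, which is what motivates the adversarial and generative approaches mentioned above.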