The article presents a methodology for evaluating models by merging In-Distribution (ID) and Out-of-Distribution (OoD) images into a single test set. For OoD detection, it adopts a Gaussian Mixture Model (GMM) score in place of the traditional softmax output. The work also links Neural Collapse to OoD detection, and shows that the proposed approach detects OoD instances quickly and robustly while maintaining classification accuracy on ID images.
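To make the scoring idea concrete, here is a minimal sketch, assuming (as is common for GMM-based OoD scores, though not stated in detail here) that the mixture is fitted to feature vectors extracted from the trained network; the function names and the choice of `scikit-learn` are illustrative, not the article's implementation.

```python
# Illustrative sketch: score samples with a Gaussian Mixture Model fitted to
# feature vectors from the trained network (assumed setup, not the paper's code).
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_feature_gmm(train_features: np.ndarray, n_components: int = 10) -> GaussianMixture:
    """Fit a GMM to ID training features (e.g., one component per class)."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full", random_state=0)
    gmm.fit(train_features)
    return gmm

def gmm_ood_score(gmm: GaussianMixture, features: np.ndarray) -> np.ndarray:
    """Per-sample log-likelihood under the GMM; low values suggest OoD inputs."""
    return gmm.score_samples(features)
```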
To evaluate models, we merge ID and OoD images into a single test set. OoD detection then becomes a binary classification task: we measure how well OoD images can be separated from ID images using a score derived from our model.
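A minimal sketch of this merged-test-set evaluation, assuming higher scores indicate ID and summarizing separability with AUROC (a standard choice for this binary setup, though the article may report additional metrics):

```python
# Treat ID-vs-OoD separation as binary classification and summarize it with AUROC.
import numpy as np
from sklearn.metrics import roc_auc_score

def ood_auroc(id_scores: np.ndarray, ood_scores: np.ndarray) -> float:
    """Labels: 1 = ID, 0 = OoD. Assumes higher scores indicate ID samples."""
    labels = np.concatenate([np.ones_like(id_scores), np.zeros_like(ood_scores)])
    scores = np.concatenate([id_scores, ood_scores])
    return roc_auc_score(labels, scores)
```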
Most research on uncertainty estimation in deep learning has taken a Bayesian approach. The large number of parameters in deep learning models renders exact posterior integration intractable, so approximations (typically variational inference) have been used.
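As a generic illustration of such approximate Bayesian uncertainty (not the article's method), Monte Carlo dropout performs several stochastic forward passes and uses the spread of the predictions as an uncertainty estimate; the helper below is a hypothetical sketch.

```python
# Hedged illustration: Monte Carlo dropout as a common variational approximation
# to Bayesian predictive uncertainty (generic example, not the article's method).
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Average softmax over stochastic forward passes; return mean and variance."""
    model.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.var(dim=0)
```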
We link Neural Collapse with Out-of-Distribution (OoD) detection and demonstrate the robustness and speed of our methodology through various experiments, which enhances its practical applicability.
Until recently, work in deep learning focused primarily on classification accuracy, with less emphasis on quantifying the uncertainty of model outputs.
#deep-learning #out-of-distribution-detection #neural-collapse #machine-learning #evaluation-methodologies