This article examines how Neural Collapse (NC), a phenomenon observed in the late stages of deep network training, relates to model performance through the lens of out-of-distribution (OoD) detection. The authors propose a method that applies L2 normalization in the feature space, which substantially improves robustness to OoD data. Experimental results indicate that the conditions imposed during training play a crucial role in how strongly NC manifests and, in turn, influence model generalization. The work carries implications for future deep learning research by highlighting the link between stronger NC properties and improved OoD performance.
The study introduces a framework for measuring Neural Collapse (NC) during training, and shows that the degree of collapse predicts how well a model will generalize to unseen data.
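The article does not spell out the authors' exact measurement, but a common way to quantify within-class collapse (the NC1 property from the neural collapse literature) is the ratio of within-class to between-class feature covariance. A minimal NumPy sketch, with illustrative function and variable names:

```python
import numpy as np

def nc1_within_class_variability(features, labels):
    """NC1-style collapse metric: tr(Sigma_W @ pinv(Sigma_B)) / C.

    features: (N, D) array of penultimate-layer activations
    labels:   (N,)  integer class labels in [0, C)
    Smaller values indicate stronger within-class collapse.
    Note: this is a standard proxy, not necessarily the paper's metric.
    """
    classes = np.unique(labels)
    C, D = len(classes), features.shape[1]
    global_mean = features.mean(axis=0)

    sigma_w = np.zeros((D, D))  # within-class covariance
    sigma_b = np.zeros((D, D))  # between-class covariance
    for c in classes:
        cls_feats = features[labels == c]
        mu_c = cls_feats.mean(axis=0)
        centered = cls_feats - mu_c
        sigma_w += centered.T @ centered / len(features)
        diff = (mu_c - global_mean)[:, None]
        sigma_b += diff @ diff.T / C

    return np.trace(sigma_w @ np.linalg.pinv(sigma_b)) / C
```

Tracking a value like this across epochs would indicate how strongly collapse emerges during training.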
Through experiments, the authors demonstrate that applying L2 normalization in the feature space can significantly improve out-of-distribution (OoD) detection.
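The precise detection rule is not given here; one standard way to exploit L2-normalized features for OoD scoring is cosine similarity to class-mean prototypes, sketched below (the function names and scoring choice are assumptions, not necessarily the authors' method):

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    """Project feature vectors onto the unit hypersphere."""
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

def ood_score(test_features, class_means):
    """Score = max cosine similarity to any (normalized) class mean.

    Low scores mean the input points away from every in-distribution
    class direction, flagging it as potentially OoD. A threshold on
    this score would be tuned on held-out data (assumed, not given).
    """
    z = l2_normalize(test_features)   # (N, D) unit-norm test features
    mu = l2_normalize(class_means)    # (C, D) unit-norm class prototypes
    return (z @ mu.T).max(axis=1)     # (N,) per-sample scores
```

After L2 normalization, only the direction of a feature vector matters, which is why distances to class prototypes become a natural OoD signal.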
#neural-collapse #deep-learning #out-of-distribution-detection #machine-learning-models #l2-normalization