Fine-Tuning NEO-KD for Robust Multi-Exit Networks | HackerNoon
Briefly

The NEO-KD algorithm brings a knowledge-distillation-based approach to adversarial training in multi-exit networks, enhancing robustness against adversarial attacks while maintaining classification accuracy.
Experiments on datasets such as MNIST and CIFAR-10 show that the exit-balancing strategy reduces performance degradation at later exits compared with existing methods.
Our analysis of confidence thresholds outlines a systematic method for validating performance across multiple exits in a budgeted prediction setup, which is crucial for optimizing inference efficiency.
A comprehensive ablation study clarifies how hyperparameters such as α and β affect the adversarial training process, providing guidance for future research.
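The budgeted prediction setup mentioned above can be sketched as follows. This is a minimal, illustrative implementation of confidence-thresholded early exiting, not NEO-KD's actual evaluation code: each exit's logits are converted to probabilities, and the first exit whose top-class confidence clears the threshold produces the prediction, falling back to the final exit otherwise.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def budgeted_predict(exit_logits, threshold=0.9):
    """Return (prediction, exit_index) for a multi-exit network.

    exit_logits: list of 1-D logit arrays, one per exit, ordered
    from earliest to latest. The first exit whose top-class
    probability reaches the threshold answers; otherwise the
    final exit does.
    """
    for k, logits in enumerate(exit_logits):
        probs = softmax(logits)
        if probs.max() >= threshold:
            return int(probs.argmax()), k
    # No exit was confident enough: use the last (full-depth) exit.
    return int(softmax(exit_logits[-1]).argmax()), len(exit_logits) - 1
```

Lowering the threshold trades accuracy for compute, since more inputs exit early; sweeping it per exit is one way to validate performance under a fixed inference budget.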
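To make the role of weighting hyperparameters concrete, here is a hypothetical sketch of an adversarial training objective in which α and β scale distillation terms added to the classification loss. The term structure here is an assumption for illustration only; it does not reproduce NEO-KD's actual neighbor or exit-wise orthogonal distillation losses.

```python
import numpy as np

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class."""
    return -float(np.log(probs[label] + 1e-12))

def kl_divergence(p, q):
    """KL(p || q) for discrete probability vectors."""
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

def weighted_adv_loss(clean_probs, adv_probs, teacher_probs, label,
                      alpha=0.5, beta=0.5):
    """Illustrative objective: classification loss on the adversarial
    example plus alpha- and beta-weighted distillation terms.
    (Hypothetical term structure, not the published NEO-KD loss.)"""
    ce = cross_entropy(adv_probs, label)
    kd_teacher = kl_divergence(teacher_probs, adv_probs)  # match a teacher output
    kd_clean = kl_divergence(clean_probs, adv_probs)      # match the clean output
    return ce + alpha * kd_teacher + beta * kd_clean
```

Setting α = β = 0 recovers plain adversarial training; an ablation over α and β then isolates how much each distillation term contributes to robustness.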