A Data-centric Approach to Class-specific Bias in Image Data Augmentation: Conclusion and Limitation | HackerNoon
Briefly

In this study, we empirically demonstrate that data augmentation-induced class-specific biases are not limited to traditional datasets like ImageNet, but also manifest in smaller, diverse datasets such as Fashion-MNIST and CIFAR.
By examining several deep learning architectures, including EfficientNetV2S and the SWIN Vision Transformer, we show that these class-specific DA-induced biases can be mitigated through architecture selection.
Our methodology for data augmentation robustness scouting offers a resource-sensitive way to critically examine DA's effects, making it easier to understand its impact across different model architectures.
Our findings underscore the need for a broader understanding of how data augmentation affects model performance, particularly on less conventional datasets.
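The class-specific bias discussed above is typically surfaced by comparing per-class accuracy between a baseline model and one trained with a given augmentation policy. As an illustrative sketch (not the paper's actual implementation; the function names and toy labels below are hypothetical), the comparison can be expressed as a per-class accuracy delta:

```python
def per_class_accuracy(y_true, y_pred, num_classes):
    """Accuracy computed separately for each class label."""
    correct = [0] * num_classes
    total = [0] * num_classes
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if p == t:
            correct[t] += 1
    return [correct[c] / total[c] if total[c] else float("nan")
            for c in range(num_classes)]

def augmentation_bias(y_true, pred_baseline, pred_augmented, num_classes):
    """Per-class accuracy delta: positive means the augmentation helped
    that class, negative means it hurt it (class-specific bias)."""
    base = per_class_accuracy(y_true, pred_baseline, num_classes)
    aug = per_class_accuracy(y_true, pred_augmented, num_classes)
    return [a - b for a, b in zip(aug, base)]

# Toy example: augmentation improves class 1 at the expense of class 0.
deltas = augmentation_bias(
    y_true=[0, 0, 1, 1],
    pred_baseline=[0, 0, 1, 0],
    pred_augmented=[0, 1, 1, 1],
    num_classes=2,
)
print(deltas)  # [-0.5, 0.5]
```

A uniform shift in these deltas would indicate an augmentation that helps or hurts all classes equally; the class-specific biases the study describes appear as deltas of opposite sign across classes, as in the toy example.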