Adding Random Horizontal Flipping Contributes To Augmentation-Induced Bias | HackerNoon
Briefly

Our trials showed trends and results similar to those in Section 2.1. However, as might be expected from removing a minor source of regularization such as RHF, overall mean performance was marginally worse across all three datasets.
In this way, we see that RHF compounds with the α-scaled Random Cropping DA, acting as a 'constant' source of additional regularization while preserving, albeit accelerating, the dynamics of test-set accuracies as α grows (a minimal sketch of such a pipeline is given below).
With this in mind, the conclusions of Balestriero, Bottou, and LeCun (2022) are validated: they would likely not have been impacted had RHF been omitted.
While not gravely consequential, this finding is a reminder that caution should be exercised when changing Data Augmentation policies for image classification.
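For concreteness, the ablation can be expressed as a small augmentation pipeline. The sketch below is an illustration under assumptions, not the authors' exact code: it presumes a torchvision setup in which α is the lower bound of the retained-area fraction passed to RandomResizedCrop, with RHF toggled on or off.

```python
# Minimal sketch (assumed setup, not the authors' pipeline): an α-scaled
# Random Cropping DA with Random Horizontal Flipping (RHF) as an optional,
# 'constant' source of extra regularization.
from torchvision import transforms


def build_train_transform(alpha: float, use_rhf: bool = True, size: int = 32):
    """Compose the α-scaled Random Crop DA with optional RHF.

    `alpha` is assumed here to be the lower bound of the retained-area
    fraction in RandomResizedCrop; adjust to match the study's exact
    convention if it differs.
    """
    ops = [
        transforms.RandomResizedCrop(size, scale=(alpha, 1.0)),
    ]
    if use_rhf:
        # The regularizer under ablation: flip each image with p = 0.5.
        ops.append(transforms.RandomHorizontalFlip(p=0.5))
    ops.append(transforms.ToTensor())
    return transforms.Compose(ops)


# The ablation compares the same alpha with and without RHF.
with_rhf = build_train_transform(alpha=0.8, use_rhf=True)
without_rhf = build_train_transform(alpha=0.8, use_rhf=False)
```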