While standard Empirical Risk Minimization (ERM) training is effective for image classification on in-distribution data, it fails to perform well on out-of-distribution samples. One of the main sources of distribution shift in image classification is the compositional nature of images: in addition to the main object or component(s) that determine the label, images usually contain other components whose distribution may shift between the train and test environments. More importantly, these components may be spuriously correlated with the label. To address this issue, we propose Decompose-and-Compose (DaC), which improves robustness to correlation shift through a compositional approach that combines elements of images. We observe that models trained with ERM usually attend strongly either to the causal components or to the components that are highly spuriously correlated with the label, especially on datapoints where the model is highly confident. Which of the two dominates depends on the strength of the spurious correlation and on how easily the label can be inferred from the causal or the non-causal components. Based on this observation, we first identify the causal components of images using class activation maps of models trained with ERM. We then intervene on images by combining them, and retrain the model on the augmented data, which includes the resulting counterfactual samples. Besides being highly interpretable, DaC balances groups by intervening on images, without requiring group labels or any information about the spurious features during training. Under correlation shift, it achieves better overall worst-group accuracy than previous methods with the same level of supervision on group labels.
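To make the identification step concrete, below is a minimal sketch of extracting a core-part mask from an ERM-trained classifier with Grad-CAM. The ResNet-50 backbone, the choice of `layer4` as the target layer, and the `threshold` value are illustrative assumptions rather than the paper's exact configuration (the analyses in Figure 1 report xGradCAM scores).

```python
import torch
import torch.nn.functional as F
from torchvision.models import ResNet50_Weights, resnet50

# Illustrative backbone; DaC works with any ERM-trained classifier.
model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()

# Cache the target layer's activations and gradients via hooks.
store = {}
model.layer4.register_forward_hook(
    lambda m, i, o: store.__setitem__("act", o))
model.layer4.register_full_backward_hook(
    lambda m, gi, go: store.__setitem__("grad", go[0]))

def core_mask(image, threshold=0.5):
    """Binary HxW mask of the (putatively causal) core parts of a CxHxW image."""
    model.zero_grad()
    logits = model(image.unsqueeze(0))
    logits[0, logits.argmax()].backward()        # gradient of the top logit
    # Grad-CAM: weight each channel by its spatially pooled gradient.
    weights = store["grad"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear")
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze(0).squeeze(0) > threshold
```

Thresholding the normalized map yields a binary mask that can be used directly, or inverted, to separate core from non-core pixels in the composition step sketched further below.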
Figure 1. Behaviour of a model trained with standard ERM on different datasets. Depending on how easily the label can be inferred from the causal or the non-causal parts across the whole dataset, the model attends more to one of them; this behaviour is most evident on samples where the model has a low loss. (a), (b) Average xGradCAM score of CIFAR-10 (causal) and MNIST (non-causal) pixels in four loss quantiles of the Dominoes train set. The model generally attends more to the non-causal parts, and as the loss decreases, attention to the non-causal parts increases. (c), (d) Average xGradCAM score of foreground (causal) and background (non-causal) pixels in four loss quantiles of the Waterbirds train set. The model generally attends to the causal parts, and as the loss decreases, attention to the causal parts increases.
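The per-quantile scores above can be computed along the following lines. This is a hedged sketch that assumes per-sample losses `losses` (N), saliency maps `cams` (N×H×W, normalized to [0, 1]), and binary masks `causal_masks` (N×H×W) marking the causal pixels have already been collected; these tensor layouts are assumptions for illustration, not the paper's exact pipeline.

```python
import torch

def attention_by_loss_quantile(losses, cams, causal_masks, n_quantiles=4):
    """Mean saliency on causal vs. non-causal pixels, per loss quantile."""
    order = losses.argsort()                  # ascending: low-loss samples first
    stats = []
    for idx in order.chunk(n_quantiles):      # one chunk per loss quantile
        cam, mask = cams[idx], causal_masks[idx].bool()
        stats.append((cam[mask].mean().item(),     # causal attention
                      cam[~mask].mean().item()))   # non-causal attention
    return stats
```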
Figure 2. An overview of our DaC method. First, we choose between two assumptions: either the mask or its inverse captures the core parts more effectively. Then, we select low-loss images and combine images with different labels.
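A minimal sketch of this compose step, under the assumption that we keep the masked core of one low-loss image and fill the remaining pixels from a second image with a different label (`core_mask` is the hypothetical helper sketched earlier):

```python
import torch

def compose(img_core, img_other, mask):
    """Keep the core of img_core (per its mask); fill the rest from img_other."""
    m = mask.float().unsqueeze(0)             # 1xHxW, broadcasts over channels
    return m * img_core + (1 - m) * img_other

# The composed image keeps the core image's label while inheriting the other
# image's non-causal parts, breaking the spurious correlation:
# x_new = compose(x_a, x_b, core_mask(x_a))   # assumes y_a != y_b
# y_new = y_a
```

Retraining on such counterfactual compositions, alongside the original data, is what balances the groups without requiring group labels.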
@InProceedings{Noohdani_2024_CVPR,
  author    = {Noohdani, Fahimeh Hosseini and Hosseini, Parsa and Parast, Aryan Yazdan and Araghi, Hamidreza Yaghoubi and Baghshah, Mahdieh Soleymani},
  title     = {Decompose-and-Compose: A Compositional Approach to Mitigating Spurious Correlation},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {27662-27671}
}