This paper was accepted at the workshop “Self-Supervised Learning – Theory and Practice” at NeurIPS 2022.
Self-supervised representation learning (SSL) methods provide an effective label-free initial condition for fine-tuning downstream tasks. However, in numerous realistic scenarios, the downstream task might be biased with respect to the target label distribution. This in turn moves the learned fine-tuned model posterior away from the initial (label) bias-free self-supervised model posterior. In this work, we re-interpret SSL fine-tuning under the lens of Bayesian continual learning and consider regularization through the Elastic Weight Consolidation (EWC) framework. We demonstrate that self-regularization against an initial SSL backbone improves worst sub-group performance on Waterbirds by 5% and CelebA by 2% when using the ViT-B/16 architecture. Furthermore, to help simplify the use of EWC with SSL, we pre-compute and publicly release the Fisher Information Matrix (FIM), evaluated with 10,000 ImageNet-1K variates on large modern SSL architectures including ViT-B/16 and ResNet50 trained with DINO.
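For concreteness, the EWC-style self-regularization referenced above typically takes the standard form introduced by Kirkpatrick et al. (2017); the sketch below uses generic notation and is an assumption about the setup, not an excerpt from the paper body:

\[
\mathcal{L}(\theta) = \mathcal{L}_{\text{task}}(\theta) + \frac{\lambda}{2} \sum_i F_i \left( \theta_i - \theta_i^{\text{SSL}} \right)^2,
\qquad
F_i = \mathbb{E}_{x \sim \mathcal{D}} \left[ \left( \frac{\partial \mathcal{L}_{\text{SSL}}(x; \theta)}{\partial \theta_i} \right)^{\!2} \right] \Bigg|_{\theta = \theta^{\text{SSL}}},
\]

where \(\theta^{\text{SSL}}\) denotes the initial self-supervised backbone parameters, \(F_i\) is the diagonal of the FIM (here presumably estimated from the SSL objective over samples such as the released 10,000 ImageNet-1K variates), and \(\lambda\) controls the strength of the pull toward the bias-free SSL posterior.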