Implicit variance regularization in non-contrastive SSL
Published in Advances in Neural Information Processing Systems 36 (NeurIPS 2023)
Non-contrastive self-supervised learning (SSL) methods like BYOL and SimSiam rely on asymmetric predictor networks to avoid representational collapse without negative samples. Yet, how predictor networks facilitate stable learning is not fully understood. While previous theoretical analyses assumed Euclidean losses, most practical implementations rely on cosine similarity. To gain further theoretical insight into non-contrastive SSL, we analytically study the learning dynamics under both the Euclidean and the cosine similarity loss in the eigenspace of closed-form linear predictor networks. We show that both avoid collapse through implicit variance regularization, albeit through different dynamical mechanisms. Moreover, we find that the predictor eigenvalues act as effective learning rate multipliers and propose a family of isotropic loss functions (IsoLoss) that equalize convergence rates across eigenmodes. Empirically, IsoLoss speeds up the initial learning dynamics and increases robustness, thereby allowing us to dispense with the EMA target network typically used with non-contrastive methods. Our analysis sheds light on the variance regularization mechanisms of non-contrastive SSL and lays the theoretical groundwork for crafting novel loss functions that shape the learning dynamics of the predictor's spectrum.
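For concreteness, here is a minimal sketch of the asymmetric non-contrastive objective discussed above: a SimSiam-style negative cosine similarity between the output of a predictor on the online branch and a stop-gradient target. The function names and module interfaces are illustrative assumptions, not the authors' released implementation.

```python
import torch.nn.functional as F

def asymmetric_cosine_loss(online_proj, target_proj, predictor):
    """Negative cosine similarity with the two hallmarks of non-contrastive SSL:
    an asymmetric predictor on the online branch and a stop-gradient on the target."""
    p = predictor(online_proj)   # predictor applied to the online branch only
    z = target_proj.detach()     # stop-gradient: no gradient flows into the target branch
    return -F.cosine_similarity(p, z, dim=-1).mean()

def symmetrized_loss(z1, z2, predictor):
    """Average the loss over both view orderings, as in BYOL/SimSiam."""
    return 0.5 * (asymmetric_cosine_loss(z1, z2, predictor)
                  + asymmetric_cosine_loss(z2, z1, predictor))
```

In the Euclidean variant that earlier theoretical analyses assumed, the cosine term would simply be replaced by a squared distance such as `(p - z).pow(2).sum(-1).mean()`.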
Top and middle rows show the neural updates in different settings, with embedding dimension $M=2$ for visualization. The bottom row shows the evolution of the eigenvalues of $W_\mathrm{P}$ during training in the settings corresponding to the top row, but with dimensions $N=15$ and $M=10$. a) Omitting the stop-grad leads to representational collapse. b) Applying the stop-grad on the wrong side also leads to collapse, with potentially diverging eigenmodes. c) Optimizing the BYOL/SimSiam loss leads to isotropic representations. d) Optimizing the isotropic loss has the same effect, but with all eigenvalues converging at a uniform rate.
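As a rough illustration of the linear toy setting behind the figure, the sketch below trains a linear encoder $W$ and a linear predictor $W_\mathrm{P}$ with a Euclidean loss and a stop-gradient on the target view, tracking the predictor's eigenvalues over training. All hyperparameters (noise scale, learning rate, batch size, step count) are assumptions chosen for illustration, not the paper's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 15, 10                  # input / embedding dimensions, as in the bottom row
B, steps, lr = 256, 3000, 0.02
sigma = 0.1                    # augmentation noise scale (illustrative)

W  = 0.1 * rng.standard_normal((M, N))   # linear encoder
Wp = 0.1 * rng.standard_normal((M, M))   # linear predictor W_P

spectra = []
for _ in range(steps):
    x  = rng.standard_normal((N, B))                 # batch of inputs
    x1 = x + sigma * rng.standard_normal((N, B))     # augmented view 1 (online)
    x2 = x + sigma * rng.standard_normal((N, B))     # augmented view 2 (target)
    z1, z2 = W @ x1, W @ x2
    # Euclidean BYOL/SimSiam-style loss with stop-gradient on z2:
    #   L = 1/(2B) * ||Wp z1 - sg(z2)||^2
    err = Wp @ z1 - z2
    grad_Wp = err @ z1.T / B                         # dL/dWp
    grad_W  = Wp.T @ err @ x1.T / B                  # dL/dW (gradient flows through z1 only)
    Wp -= lr * grad_Wp
    W  -= lr * grad_W
    spectra.append(np.sort(np.linalg.eigvals(Wp).real))

print("final predictor eigenvalues:", np.round(spectra[-1], 3))
```

Removing the stop-gradient corresponds to adding a second term `- err @ x2.T / B` to `grad_W`, which, per panel a) of the caption, should drive the representation toward collapse.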