An eigenspace view reveals how predictor networks and stopgrads provide implicit variance regularization
Published in the NeurIPS 2022 Workshop on Self-Supervised Learning
Our workshop paper explains why the BYOL and SimSiam methods do not suffer from representation collapse.
Download here