Training Barlow Twins with Small Batch Sizes by Using a Queue of Previous Outputs
Pedro de Carvalho Cayres Pinto, Jose Gabriel Gomes

DOI: 10.14209/sbrt.2023.1570915729
Event: XLI Simpósio Brasileiro de Telecomunicações e Processamento de Sinais (SBrT2023)
Keywords: self-supervised learning, convolutional neural networks, deep learning
Abstract
We present two methods based on Barlow Twins, a self-supervised learning method, to improve training with smaller batches. The first method randomly drops features from the network output before computing the loss, reducing the variance of the loss estimate. The second method introduces a queue of outputs from previous batches to improve the loss estimate during training. With a batch size of 64, the first method achieves an accuracy of 64.1%; with a batch size of 64 and 192 queued outputs, the second method achieves an accuracy of 65.4%; the original method, with a batch size of 256, achieves an accuracy of 66.0%.
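To make the two variants concrete, the sketch below shows how a Barlow Twins-style loss could optionally drop a random subset of output features and incorporate a queue of embeddings from previous batches. This is a minimal PyTorch sketch under our own assumptions, not the authors' implementation; names such as `feature_drop_ratio`, `queue_a`, and `queue_b` are illustrative.

```python
# Minimal sketch (assumed PyTorch setting, not the authors' code) of a Barlow
# Twins loss supporting the two variants described in the abstract:
# random feature dropping and a queue of embeddings from previous batches.
import torch


def barlow_twins_loss(z_a, z_b, lambd=5e-3, feature_drop_ratio=0.0,
                      queue_a=None, queue_b=None):
    """z_a, z_b: (N, D) embeddings of two augmented views of the same batch.
    queue_a, queue_b: optional (Q, D) embeddings saved from previous batches,
    used only to improve the cross-correlation estimate (assumed detached)."""
    if queue_a is not None and queue_b is not None:
        # Enlarge the effective batch with stored outputs from earlier steps.
        z_a = torch.cat([z_a, queue_a.detach()], dim=0)
        z_b = torch.cat([z_b, queue_b.detach()], dim=0)

    if feature_drop_ratio > 0.0:
        # Keep a random subset of the output features before computing the loss.
        d = z_a.shape[1]
        keep = torch.randperm(d, device=z_a.device)[: int(d * (1 - feature_drop_ratio))]
        z_a, z_b = z_a[:, keep], z_b[:, keep]

    # Standardize each feature over the (possibly enlarged) batch dimension.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-6)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-6)

    n, d = z_a.shape
    c = (z_a.T @ z_b) / n  # (D, D) cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag_embed(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambd * off_diag
```

In this sketch the queued embeddings only stabilize the correlation estimate and carry no gradients; how the queue is filled and refreshed, and whether gradients flow through it, would follow the paper's actual training procedure.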
