AI/ML

Unsupervised Data Augmentation for Consistency Training. (arXiv:1904.12848v5 [cs.LG] UPDATED)

Semi-supervised learning has lately shown much promise for improving deep
learning models when labeled data is scarce. Common among recent approaches is
the use of consistency training on a large amount of unlabeled data to
constrain model predictions to be invariant to input noise. In this work, we
present a new perspective on how to effectively noise unlabeled examples and
argue that the quality of the noise, specifically that produced by advanced data
augmentation methods, plays a crucial role in semi-supervised learning. By
substituting simple noising operations with advanced data augmentation methods
such as RandAugment and back-translation, our method brings substantial
improvements across six language and three vision tasks under the same
consistency training framework. On the IMDb text classification dataset, with
only 20 labeled examples, our method achieves an error rate of 4.20,
outperforming the state-of-the-art model trained on 25,000 labeled examples. On
a standard semi-supervised learning benchmark, CIFAR-10, our method outperforms
all previous approaches and achieves an error rate of 5.43 with only 250
examples. Our method also combines well with transfer learning, e.g., when
fine-tuning from BERT, and yields improvements in the high-data regime, such as
ImageNet, whether there is only 10% labeled data or a full labeled set with
1.3M extra unlabeled examples is used. Code is available at
https://github.com/google-research/uda.
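
To make the consistency-training objective concrete, here is a minimal sketch in PyTorch. It is not the authors' implementation (the official TensorFlow code is in the repository linked above); the model, argument names, and the weighting factor `lam` are placeholders, and the augmented batch is assumed to come from an advanced augmentation such as RandAugment (images) or back-translation (text).

```python
# Minimal sketch of a UDA-style training objective: supervised cross-entropy
# on labeled data plus a consistency term on unlabeled data. Names and the
# lambda weight are illustrative assumptions, not the reference implementation.
import torch
import torch.nn.functional as F

def uda_loss(model, x_labeled, y_labeled, x_unlabeled, x_unlabeled_aug, lam=1.0):
    # Standard supervised loss on the small labeled batch.
    sup_loss = F.cross_entropy(model(x_labeled), y_labeled)

    # Predictions on the original unlabeled examples act as fixed targets
    # (no gradient), so the model is pushed to match them on the augmented
    # views rather than the other way around.
    with torch.no_grad():
        p_orig = F.softmax(model(x_unlabeled), dim=-1)

    log_p_aug = F.log_softmax(model(x_unlabeled_aug), dim=-1)

    # KL(p_orig || p_aug) encourages predictions to be invariant to the
    # noise injected by the augmentation.
    consistency = F.kl_div(log_p_aug, p_orig, reduction="batchmean")

    return sup_loss + lam * consistency
```

The key difference from earlier consistency-training methods is not the loss itself but how `x_unlabeled_aug` is produced: simple noising operations are replaced by stronger, task-specific augmentations.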
