Introduction to Neural Transfer Learning with Transformers for Social Science Text Analysis. (arXiv:2102.02111v2 [cs.CL] UPDATED)
cs.CL updates on arXiv.org
Transformer-based models for transfer learning have the potential to achieve
high prediction accuracy on text-based supervised learning tasks with
relatively few training data instances. These models are thus likely to benefit
social scientists who seek text-based measures that are as accurate as possible
but have only limited resources for annotating training data. To enable social
scientists to leverage these potential benefits for their research, this paper
explains how these methods work, why they might be advantageous, and what their
limitations …