Sept. 1, 2022, 1:13 a.m. | Sandra Wankmüller

cs.CL updates on arXiv.org

Transformer-based models for transfer learning have the potential to achieve
high prediction accuracies on text-based supervised learning tasks with
relatively few training data instances. These models are thus likely to benefit
social scientists who seek text-based measures that are as accurate as possible
but have only limited resources for annotating training data. To enable social
scientists to leverage these potential benefits for their research, this paper
explains how these methods work, why they might be advantageous, and what their
limitations …
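
The workflow the abstract refers to is fine-tuning a pretrained transformer on a small annotated corpus. Below is a minimal sketch of that idea (not code from the paper), using the Hugging Face `transformers` library with PyTorch; the checkpoint name, toy texts, labels, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: fine-tune a pretrained transformer for text classification
# with only a handful of labeled examples (illustrative, not the paper's code).
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased"  # assumed pretrained checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tiny illustrative training set; in practice a few hundred annotated
# documents can already yield usable accuracy thanks to transfer learning.
texts = ["The senator praised the new policy.", "The bill was sharply criticized."]
labels = torch.tensor([1, 0])  # e.g., positive vs. negative stance (assumed labels)

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = AdamW(model.parameters(), lr=2e-5)  # small learning rate, typical for fine-tuning
model.train()
for epoch in range(3):                          # a few epochs suffice on small data
    optimizer.zero_grad()
    outputs = model(**batch, labels=labels)     # forward pass returns the loss
    outputs.loss.backward()                     # backpropagate through all layers
    optimizer.step()

# Apply the fine-tuned classifier to new text.
model.eval()
with torch.no_grad():
    new = tokenizer(["A balanced assessment of the reform."], return_tensors="pt")
    pred = model(**new).logits.argmax(dim=-1)
print(pred.item())
```

The design choice this illustrates is that the pretrained weights already encode general language knowledge, so only a brief, low-learning-rate fine-tuning pass on the researcher's own labeled data is needed.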

Tags: analysis, arxiv, introduction, learning, science, social science, text, transfer learning, transformers
