Super-Prompting: Utilizing Model-Independent Contextual Data to Reduce Data Annotation Required in Visual Commonsense Tasks. (arXiv:2204.11922v1 [cs.CL])
April 27, 2022, 1:11 a.m. | Navid Rezaei, Marek Z. Reformat
cs.CL updates on arXiv.org arxiv.org
Pre-trained language models have shown excellent results in few-shot learning
scenarios using in-context learning. Although impressive, the size of such
language models can be prohibitive for on-device applications such as sensors
or smartphones. With smaller language models, task-specific data annotation is
needed to fine-tune the model for a specific purpose. However, data annotation
can impose a substantial financial and time burden on small research groups,
startups, and even companies. In this paper, we analyze …
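The contrast the abstract draws can be made concrete: with in-context learning, a frozen pre-trained model is given a handful of labeled demonstrations inside the prompt itself, so no fine-tuning or large annotated dataset is required. A minimal sketch of how such a few-shot prompt is assembled (the sentiment task, labels, and function name here are illustrative assumptions, not taken from the paper):

```python
# Minimal sketch of few-shot in-context learning: the task is demonstrated
# inline in the prompt, and a frozen language model would complete the final
# query. The example task and labels are hypothetical, for illustration only.
def build_few_shot_prompt(examples, query):
    """Assemble a prompt from (input, label) demonstrations plus a final query."""
    blocks = [f"Input: {x}\nLabel: {y}" for x, y in examples]
    # The unanswered query is appended last; the model's continuation is the prediction.
    blocks.append(f"Input: {query}\nLabel:")
    return "\n\n".join(blocks)

demos = [
    ("The movie was wonderful.", "positive"),
    ("I hated every minute.", "negative"),
]
prompt = build_few_shot_prompt(demos, "An absolute delight to watch.")
print(prompt)
```

Fine-tuning a smaller model for the same task would instead require a full set of such labeled pairs as training data, which is the annotation cost the paper targets.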