April 24, 2023, 12:48 a.m. | Ziang Xiao, Xingdi Yuan, Q. Vera Liao, Rania Abdelghani, Pierre-Yves Oudeyer

cs.CL updates on arXiv.org arxiv.org

Qualitative analysis of textual content unpacks rich and valuable information by assigning labels to the data. However, this process is often labor-intensive, particularly when working with large datasets. While recent AI-based tools demonstrate utility, researchers may not have ready access to AI resources and expertise, and may also be challenged by the limited generalizability of those task-specific models. In this study, we explored the use of large language models (LLMs) to support deductive coding, a major category of qualitative analysis in which researchers …
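The abstract above is truncated, so the exact pipeline is not shown here. As a rough illustration of the general idea of LLM-assisted deductive coding (not the authors' method), the sketch below embeds a predefined codebook in a prompt and asks a model to assign one label per excerpt. It assumes the pre-1.0 `openai` Python package; the codebook entries, prompt wording, and model choice are hypothetical.

```python
# Minimal sketch of LLM-assisted deductive coding; NOT the authors' exact pipeline.
# Assumes the pre-1.0 `openai` Python package and an OPENAI_API_KEY in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Hypothetical codebook: label -> definition, as a researcher might draft it.
CODEBOOK = {
    "curiosity": "The speaker asks questions or expresses a desire to learn more.",
    "frustration": "The speaker expresses annoyance, difficulty, or dissatisfaction.",
}

def build_prompt(text: str) -> str:
    """Embed the codebook definitions and the excerpt to be coded in one prompt."""
    codebook_lines = "\n".join(
        f"- {label}: {definition}" for label, definition in CODEBOOK.items()
    )
    return (
        "You are assisting with deductive qualitative coding.\n"
        f"Codebook:\n{codebook_lines}\n\n"
        "Assign exactly one codebook label to the excerpt below. "
        "Reply with the label only.\n\n"
        f"Excerpt: {text}"
    )

def code_excerpt(text: str) -> str:
    """Ask the model for a single codebook label; returns the raw label string."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": build_prompt(text)}],
        temperature=0,  # keep output as deterministic as possible for reproducible coding
    )
    return response["choices"][0]["message"]["content"].strip()

if __name__ == "__main__":
    print(code_excerpt("I keep wondering how this tool decides which answer to show first."))
```

In practice, a researcher would still spot-check the model's labels against human-coded examples before trusting them at scale.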

