Can Large Language Models Follow Concept Annotation Guidelines? A Case Study on Scientific and Financial Domains
June 28, 2024, 4:42 a.m. | Marcio Fonseca, Shay B. Cohen
cs.CL updates on arXiv.org | arxiv.org
Abstract: Although large language models (LLMs) exhibit remarkable capacity to leverage in-context demonstrations, it is still unclear to what extent they can learn new concepts or facts from ground-truth labels. To address this question, we examine the capacity of instruction-tuned LLMs to follow in-context concept guidelines for sentence labeling tasks. We design guidelines that present different types of factual and counterfactual concept definitions, which are used as prompts for zero-shot sentence classification tasks. Our results show …
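The setup the abstract describes — a concept definition supplied in-context, followed by a zero-shot labeling query — can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the guideline wording, the label, the example sentence, and the query_llm helper are all assumptions made for the example.

```python
# Illustrative sketch of prompting an instruction-tuned LLM with a concept
# annotation guideline for zero-shot sentence classification, in the spirit
# of the paper's setup. Guideline text, label, and query_llm() are
# hypothetical placeholders, not the authors' actual materials.

FACTUAL_GUIDELINE = (
    "A sentence expresses a LIMITATION if it acknowledges a shortcoming "
    "or constraint of the presented work."
)

# A counterfactual variant deliberately inverts the definition, probing
# whether the model follows the in-context guideline or falls back on
# knowledge acquired during pre-training.
COUNTERFACTUAL_GUIDELINE = (
    "A sentence expresses a LIMITATION if it highlights a strength "
    "or contribution of the presented work."
)

def build_prompt(guideline: str, sentence: str) -> str:
    """Prepend the concept guideline to a zero-shot labeling query."""
    return (
        f"Annotation guideline: {guideline}\n\n"
        f"Sentence: {sentence}\n"
        "Does the sentence express a LIMITATION? Answer yes or no."
    )

def query_llm(prompt: str) -> str:
    # Placeholder for a call to an instruction-tuned LLM
    # (an API client or a local model); returns the model's label.
    raise NotImplementedError

sentence = "Our evaluation is restricted to English-language corpora."
for name, guideline in [("factual", FACTUAL_GUIDELINE),
                        ("counterfactual", COUNTERFACTUAL_GUIDELINE)]:
    prompt = build_prompt(guideline, sentence)
    print(f"--- {name} guideline ---\n{prompt}\n")
    # label = query_llm(prompt)  # compare labels across guideline variants
```

Comparing the labels a model produces under the factual and counterfactual variants indicates whether it is actually reading the guideline or merely recalling the concept from pre-training — which is the distinction the study is after.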
More from arxiv.org / cs.CL updates on arXiv.org
ReFT: Reasoning with Reinforced Fine-Tuning
2 days, 10 hours ago | arxiv.org
Exploring Defeasibility in Causal Reasoning
2 days, 10 hours ago | arxiv.org
A Large Language Model Approach to Educational Survey Feedback Analysis
2 days, 10 hours ago | arxiv.org
Jobs in AI, ML, Big Data
Data Scientist
@ Ford Motor Company | Chennai, Tamil Nadu, India
Systems Software Engineer, Graphics
@ Parallelz | Vancouver, British Columbia, Canada - Remote
Engineering Manager - Geo Engineering Team (F/H/X)
@ AVIV Group | Paris, France
Data Analyst
@ Microsoft | San Antonio, Texas, United States
Azure Data Engineer
@ TechVedika | Hyderabad, India
Senior Data & AI Threat Detection Researcher (Cortex)
@ Palo Alto Networks | Tel Aviv-Yafo, Israel