FPT: Feature Prompt Tuning for Few-shot Readability Assessment
April 4, 2024, 4:47 a.m. | Ziyang Wang, Sanwoo Lee, Hsiu-Yuan Huang, Yunfang Wu
cs.CL updates on arXiv.org arxiv.org
Abstract: Prompt-based methods have achieved promising results in most few-shot text classification tasks. However, for readability assessment tasks, traditional prompt methods lack crucial linguistic knowledge, which has already been proven to be essential. Moreover, previous studies on utilizing linguistic features have shown non-robust performance in few-shot settings and may even impair model performance. To address these issues, we propose a novel prompt-based tuning framework that incorporates rich linguistic knowledge, called Feature Prompt Tuning (FPT). Specifically, we extract linguistic …
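The core idea — extracting linguistic features from the input text and injecting them into the prompt — can be sketched roughly as follows. This is a minimal, hypothetical illustration: the feature set, the prompt template, and the function names below are assumptions for exposition, not the paper's actual design.

```python
# Hypothetical sketch of feature-augmented prompting for readability
# assessment: compute a few surface-level linguistic features and
# prepend them to the input as a prompt prefix, so a masked-LM-style
# classifier can condition on them. Illustrative only.

def extract_features(text: str) -> dict:
    """Compute simple surface-level linguistic features of a text."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    words = text.split()
    return {
        # average number of tokens per sentence
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # average character length of tokens
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        # lexical diversity: distinct tokens over total tokens
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
    }

def build_feature_prompt(text: str) -> str:
    """Format the features into a prompt prefix for a masked-LM classifier."""
    feats = extract_features(text)
    feat_str = ", ".join(f"{k}={v:.2f}" for k, v in feats.items())
    return f"[Features: {feat_str}] Text: {text} Readability level: [MASK]"

prompt = build_feature_prompt("The cat sat. It was warm.")
```

In the paper's actual framework the features are embedded and tuned jointly with the prompt rather than verbalized as text; the string template above is only a readable stand-in for that mechanism.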