Feb. 15, 2024, 5:46 a.m. | Pascal Passigan, Kidus Yohannes, Joshua Pereira

cs.CL updates on arXiv.org

arXiv:2312.10323v2 Announce Type: replace
Abstract: The wayward quality of continuous prompts stresses the importance of their interpretability, as unexpected and unpredictable behaviors can appear after training, especially when large language models automate people-sensitive tasks such as resume screening. In this paper, we present a novel method of constructing continuous prompts from discrete prompt embeddings and evaluate the resulting improvements in continuous prompt interpretability and inference accuracy. For a set of manually designed discrete prompts $\mathcal{D}$, which we tokenize and embed …
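
The abstract is truncated, so the exact construction is not spelled out here. Below is a minimal sketch of the general idea it describes, assuming the continuous prompt is parameterized as a learnable mixture over the embeddings of the manually designed discrete prompts in $\mathcal{D}$; the class name, mixing scheme, and dimensions are illustrative assumptions, not the paper's formulation.

```python
# Sketch: a continuous prompt built as a learnable linear combination of
# discrete prompt embeddings. Assumes all discrete prompts in D are padded
# to the same token length; this is an illustration, not the paper's method.
import torch
import torch.nn as nn


class DiscreteMixturePrompt(nn.Module):
    """Continuous prompt parameterized by mixing weights over a discrete prompt bank."""

    def __init__(self, discrete_prompt_embeddings: torch.Tensor):
        # discrete_prompt_embeddings: (num_prompts, prompt_len, hidden_dim),
        # obtained by tokenizing and embedding each manually designed prompt.
        super().__init__()
        self.register_buffer("prompt_bank", discrete_prompt_embeddings)
        num_prompts = discrete_prompt_embeddings.shape[0]
        # One learnable weight per discrete prompt; softmax keeps the mixture
        # interpretable as a distribution over human-readable prompts.
        self.mix_logits = nn.Parameter(torch.zeros(num_prompts))

    def forward(self) -> torch.Tensor:
        weights = torch.softmax(self.mix_logits, dim=0)        # (num_prompts,)
        # Weighted sum over the bank -> (prompt_len, hidden_dim)
        return torch.einsum("p,pld->ld", weights, self.prompt_bank)


if __name__ == "__main__":
    # Toy bank: 4 discrete prompts, 8 tokens each, 16-dim embeddings.
    bank = torch.randn(4, 8, 16)
    prompt = DiscreteMixturePrompt(bank)
    print(prompt().shape)                                   # torch.Size([8, 16])
    # The softmax weights can be inspected directly to see which discrete
    # prompt dominates the learned continuous prompt.
    print(torch.softmax(prompt.mix_logits, dim=0))
```

Because the continuous prompt stays anchored to the discrete prompt bank, its learned weights can be read off after training, which is the interpretability benefit the abstract emphasizes.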

