Comparing Plausibility Estimates in Base and Instruction-Tuned Large Language Models
March 25, 2024, 4:46 a.m. | Carina Kauf, Emmanuele Chersoni, Alessandro Lenci, Evelina Fedorenko, Anna A. Ivanova
cs.CL updates on arXiv.org
Abstract: Instruction-tuned LLMs can respond to explicit queries formulated as prompts, which greatly facilitates interaction with human users. However, prompt-based approaches might not always be able to tap into the wealth of implicit knowledge acquired by LLMs during pre-training. This paper presents a comprehensive study of ways to evaluate semantic plausibility in LLMs. We compare base and instruction-tuned LLM performance on an English sentence plausibility task via (a) explicit prompting and (b) implicit estimation via direct …
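The abstract contrasts two ways of probing plausibility knowledge: asking an instruction-tuned model explicitly via a prompt, and reading off an implicit estimate from the probability a model assigns to the sentence itself. The sketch below (not the authors' code) illustrates both modes with Hugging Face transformers; the model name, prompt wording, and scoring details are illustrative assumptions, not the paper's actual setup.

```python
# Hedged sketch of the two plausibility-evaluation modes described in the abstract.
# Model choice ("gpt2" as a stand-in base LM), prompt wording, and scoring are
# assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def sentence_log_prob(model, tokenizer, sentence: str) -> float:
    """Implicit estimate: total log-probability of the sentence under the LM."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, labels=inputs["input_ids"])
    # out.loss is the mean negative log-likelihood over the predicted positions
    # (sequence length minus one, because of the next-token shift).
    n_predicted = inputs["input_ids"].shape[1] - 1
    return -out.loss.item() * n_predicted

def prompted_plausibility(model, tokenizer, sentence: str) -> str:
    """Explicit estimate: query the model directly (assumes an instruction-tuned checkpoint)."""
    prompt = f"Is the following sentence plausible? Answer Yes or No.\n\n{sentence}\n\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        generated = model.generate(**inputs, max_new_tokens=3)
    # Return only the newly generated tokens (the model's answer).
    return tokenizer.decode(
        generated[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("gpt2")        # stand-in base model
    lm = AutoModelForCausalLM.from_pretrained("gpt2")
    # A plausible sentence should receive a higher log-probability than its
    # implausible counterpart under the implicit measure.
    print(sentence_log_prob(lm, tok, "The teacher bought the laptop."))
    print(sentence_log_prob(lm, tok, "The laptop bought the teacher."))
```

For the explicit mode, an instruction-tuned checkpoint would be substituted for the base model; the paper's comparison concerns how well each access method recovers the plausibility knowledge acquired during pre-training.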