Multi-Prompting Decoder Helps Better Language Understanding
June 11, 2024, 4:42 a.m. | Zifeng Cheng, Zhaoling Chen, Zhiwei Jiang, Yafeng Yin, Shiping Ge, Yuliang Liu, Qing Gu
cs.CL updates on arXiv.org
Abstract: Recent Pre-trained Language Models (PLMs) usually provide users with only inference APIs, namely the emerging Model-as-a-Service (MaaS) setting. To adapt MaaS PLMs to downstream tasks without accessing their parameters and gradients, some existing methods focus on the output-side adaptation of PLMs, viewing the PLM as an encoder and then optimizing a task-specific decoder for decoding the output hidden states and class scores of the PLM. Despite the effectiveness of these methods, they only use …
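To make the output-side adaptation setup concrete, below is a minimal sketch (not the paper's actual Multi-Prompting Decoder) of training a task-specific decoder on hidden states returned by a black-box PLM inference API. The `query_plm_api` function, the hidden size, and the two-class task are all assumptions for illustration; in the MaaS setting only the locally held decoder receives gradients.

```python
import torch
import torch.nn as nn

HIDDEN_DIM = 768   # assumed hidden size of the served PLM
NUM_CLASSES = 2    # assumed downstream classification task


def query_plm_api(texts):
    """Hypothetical MaaS inference API: returns output hidden states for a
    batch of texts. No PLM parameters or gradients are exposed."""
    # Placeholder: a real service endpoint would be called here.
    return torch.randn(len(texts), HIDDEN_DIM)


class TaskDecoder(nn.Module):
    """Task-specific decoder trained locally on the PLM's output hidden states."""

    def __init__(self, hidden_dim, num_classes):
        super().__init__()
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, hidden_states):
        return self.classifier(hidden_states)


decoder = TaskDecoder(HIDDEN_DIM, NUM_CLASSES)
optimizer = torch.optim.AdamW(decoder.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

texts = ["a great movie", "a dull movie"]
labels = torch.tensor([1, 0])

# Hidden states come from the black-box PLM; there is no gradient path into
# the service, so only the decoder's parameters are updated.
hidden_states = query_plm_api(texts)
logits = decoder(hidden_states)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
```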