Feb. 27, 2024, 5:50 a.m. | Hoyoun Jung, Kyung-Joong Kim

cs.CL updates on arXiv.org

arXiv:2308.08758v2 Announce Type: replace
Abstract: Compressed prompts help instruction-tuned language models (LMs) overcome context window limitations and reduce computational costs. Existing methods, which are primarily based on training embeddings, face several challenges: limited interpretability, a fixed number of embedding tokens, poor reusability across different LMs, and inapplicability when interacting with black-box APIs. This study proposes prompt compression with reinforcement learning (PCRL), a discrete prompt compression method that addresses these issues. The proposed PCRL method utilizes a computationally efficient …
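To make the idea of discrete prompt compression via reinforcement learning concrete, here is a minimal sketch assuming a per-token keep/drop policy trained with REINFORCE. The class and function names (`KeepDropPolicy`, `toy_reward`, `reinforce_step`) and the toy reward are illustrative stand-ins, not the authors' implementation; in the paper, the reward would score the frozen LM's output on the compressed prompt, which this self-contained example fakes with hypothetical per-token importance weights.

```python
# A minimal sketch of discrete prompt compression with REINFORCE.
# All names and the reward are hypothetical; PCRL's actual reward measures
# output faithfulness of a frozen LM on the compressed prompt.
import torch
import torch.nn as nn

class KeepDropPolicy(nn.Module):
    """Scores each prompt token with a keep-probability."""
    def __init__(self, vocab_size: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.score = nn.Linear(dim, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # One keep-probability per input token.
        return torch.sigmoid(self.score(self.embed(token_ids))).squeeze(-1)

def toy_reward(mask: torch.Tensor, importance: torch.Tensor,
               lam: float = 0.5) -> torch.Tensor:
    # Stand-in reward: "fidelity" here is keeping high-importance tokens,
    # plus a bonus for dropping tokens (shorter compressed prompt).
    fidelity = (mask * importance).sum() / importance.sum()
    compression = 1.0 - mask.mean()
    return fidelity + lam * compression

def reinforce_step(policy, optimizer, token_ids, importance):
    probs = policy(token_ids)                     # keep-prob per token
    dist = torch.distributions.Bernoulli(probs)
    mask = dist.sample()                          # 1 = keep, 0 = drop (discrete)
    reward = toy_reward(mask, importance)
    loss = -(dist.log_prob(mask).sum() * reward)  # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return mask, reward.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    policy = KeepDropPolicy(vocab_size=100)
    opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
    tokens = torch.randint(0, 100, (12,))         # a 12-token "prompt"
    importance = torch.rand(12)                   # hypothetical token weights
    for _ in range(200):
        mask, r = reinforce_step(policy, opt, tokens, importance)
    print("kept tokens:", tokens[mask.bool()].tolist(), "reward:", round(r, 3))
```

Because the policy outputs a binary mask over ordinary token ids rather than trained soft embeddings, the compressed prompt stays human-readable and can be sent to any LM, including a black-box API, which is the property the abstract highlights.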

