Generation-driven Contrastive Self-training for Zero-shot Text Classification with Instruction-following LLM
April 16, 2024, 4:51 a.m. | Ruohong Zhang, Yau-Shian Wang, Yiming Yang
cs.CL updates on arXiv.org arxiv.org
Abstract: The remarkable performance of large language models (LLMs) in zero-shot language understanding has garnered significant attention. However, employing LLMs for large-scale inference or domain-specific fine-tuning requires immense computational resources due to their substantial model size. To overcome these limitations, we introduce a novel method, namely GenCo, which leverages the strong generative power of LLMs to assist in training a smaller and more adaptable language model. In our method, an LLM plays an important role in …
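The full paper details the GenCo method; as a rough illustration of the general idea the abstract describes (an LLM pseudo-labels data to train a smaller student classifier via self-training), here is a minimal sketch. All names here (`llm_pseudo_label`, `CentroidClassifier`) are illustrative inventions, not from the paper, and a keyword stub stands in for the instruction-following LLM.

```python
# Hedged sketch: LLM-assisted self-training for zero-shot text
# classification. A keyword-based stub plays the role of the LLM;
# a tiny bag-of-words centroid model plays the "smaller, more
# adaptable" student. This is NOT the GenCo algorithm itself.

from collections import Counter

LABELS = ["sports", "tech"]

def llm_pseudo_label(text: str) -> str:
    """Stand-in for an instruction-following LLM prompted to label text."""
    return "sports" if any(w in text for w in ("game", "team", "score")) else "tech"

class CentroidClassifier:
    """Toy student model: per-label word-count centroids."""
    def __init__(self):
        self.centroids = {label: Counter() for label in LABELS}

    def train(self, texts, labels):
        # Accumulate word counts for each pseudo-labeled example.
        for text, label in zip(texts, labels):
            self.centroids[label].update(text.lower().split())

    def predict(self, text):
        # Pick the label whose centroid overlaps the input most.
        words = text.lower().split()
        return max(LABELS, key=lambda lb: sum(self.centroids[lb][w] for w in words))

# Self-training loop: the "LLM" labels raw text, the student trains on it.
unlabeled = [
    "the team won the game with a late score",
    "the new chip doubles inference speed",
]
pseudo_labels = [llm_pseudo_label(t) for t in unlabeled]
student = CentroidClassifier()
student.train(unlabeled, pseudo_labels)
print(student.predict("a close game for the home team"))  # → sports
```

The sketch conveys only the division of labor: expensive LLM inference happens once to produce training signal, after which the lightweight student handles large-scale prediction.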