April 16, 2024, 4:51 a.m. | Ruohong Zhang, Yau-Shian Wang, Yiming Yang

cs.CL updates on arXiv.org

arXiv:2304.11872v2 Announce Type: replace
Abstract: The remarkable performance of large language models (LLMs) in zero-shot language understanding has garnered significant attention. However, employing LLMs for large-scale inference or domain-specific fine-tuning requires immense computational resources due to their substantial model size. To overcome these limitations, we introduce a novel method, namely GenCo, which leverages the strong generative power of LLMs to assist in training a smaller and more adaptable language model. In our method, an LLM plays an important role in …

