March 25, 2024, 4:42 a.m. | Xindi Luo, Zequn Sun, Jing Zhao, Zhe Zhao, Wei Hu

cs.LG updates on arXiv.org

arXiv:2403.14950v1 Announce Type: cross
Abstract: Parameter-efficient finetuning (PEFT) is a key technique for adapting large language models (LLMs) to downstream tasks. In this paper, we study leveraging knowledge graph embeddings to improve the effectiveness of PEFT. We propose a knowledgeable adaptation method called KnowLA. It inserts an adaptation layer into an LLM to integrate the embeddings of entities appearing in the input text. The adaptation layer is trained in combination with LoRA on instruction data. Experiments on six benchmarks with …
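The abstract only sketches the mechanism, so below is a minimal PyTorch sketch of what such a knowledgeable adaptation layer could look like. All names, shapes, and the gated-fusion design here are illustrative assumptions, not the paper's actual implementation: it projects pretrained knowledge graph entity embeddings into the LLM's hidden space and fuses them into the hidden states at the token positions where linked entities appear.

```python
import torch
import torch.nn as nn

class KnowledgeableAdaptationLayer(nn.Module):
    """Hypothetical KnowLA-style adaptation layer (illustrative sketch).

    Projects KG entity embeddings into the LLM hidden dimension and
    adds them, via a learned gate, to the hidden states of tokens that
    link to an entity in the input text.
    """

    def __init__(self, kg_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(kg_dim, hidden_dim)  # KG space -> LLM space
        self.gate = nn.Linear(hidden_dim * 2, 1)   # learned fusion gate

    def forward(self, hidden, entity_emb, entity_mask):
        # hidden:      (batch, seq, hidden_dim) hidden states from an LLM layer
        # entity_emb:  (batch, seq, kg_dim)     entity embedding per token (zeros if none)
        # entity_mask: (batch, seq, 1)          1.0 where a token links to a KG entity
        knowledge = self.proj(entity_emb)
        g = torch.sigmoid(self.gate(torch.cat([hidden, knowledge], dim=-1)))
        return hidden + entity_mask * g * knowledge
```

Consistent with the abstract, the intent would be that only this layer and the LoRA matrices receive gradients during instruction tuning, while the base LLM's weights stay frozen.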

