March 25, 2024, 4:42 a.m. | Xindi Luo, Zequn Sun, Jing Zhao, Zhe Zhao, Wei Hu

cs.LG updates on arXiv.org

arXiv:2403.14950v1 Announce Type: cross
Abstract: Parameter-efficient finetuning (PEFT) is a key technique for adapting large language models (LLMs) to downstream tasks. In this paper, we study leveraging knowledge graph embeddings to improve the effectiveness of PEFT. We propose a knowledgeable adaptation method called KnowLA. It inserts an adaptation layer into an LLM to integrate the embeddings of entities appearing in the input text. The adaptation layer is trained in combination with LoRA on instruction data. Experiments on six benchmarks with …
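To make the idea concrete, here is a minimal sketch of what such a knowledgeable adaptation layer could look like. The abstract does not specify KnowLA's exact architecture, so the module name, the gating design, and all tensor shapes below are illustrative assumptions, not the paper's method: the sketch simply projects knowledge-graph entity embeddings into the LLM's hidden space and fuses them with token hidden states at entity-linked positions.

```python
import torch
import torch.nn as nn

class KnowledgeAdapterSketch(nn.Module):
    """Hypothetical adaptation layer in the spirit of the abstract.

    Projects KG entity embeddings into the LLM hidden space and fuses
    them with token hidden states via a learned gate. The real KnowLA
    architecture may differ; this is an assumption-laden illustration.
    """

    def __init__(self, hidden_dim: int, kg_dim: int):
        super().__init__()
        # Map the KG embedding space into the LLM's hidden dimension.
        self.kg_proj = nn.Linear(kg_dim, hidden_dim)
        # Gate decides, per position, how much knowledge to inject.
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(
        self,
        hidden: torch.Tensor,       # (batch, seq, hidden_dim) token states
        entity_emb: torch.Tensor,   # (batch, seq, kg_dim) linked-entity
                                    # embeddings (zeros where none linked)
        entity_mask: torch.Tensor,  # (batch, seq, 1) 1.0 at entity tokens
    ) -> torch.Tensor:
        k = self.kg_proj(entity_emb)
        g = torch.sigmoid(self.gate(torch.cat([hidden, k], dim=-1)))
        # Residual update only at positions linked to a KG entity.
        return hidden + entity_mask * g * k
```

Consistent with the abstract, a layer like this would be trained jointly with LoRA adapters on instruction data while the base LLM weights stay frozen, so the knowledge injection adds only a small number of trainable parameters.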
