Feb. 26, 2024, 5:43 a.m. | Zhisheng Lin, Han Fu, Chenghao Liu, Zhuo Li, Jianling Sun

cs.LG updates on arXiv.org

arXiv:2402.15082v1 Announce Type: cross
Abstract: Parameter-efficient fine-tuning (PEFT) has emerged as an effective method for adapting pre-trained language models to a variety of downstream tasks. Recently, there has been growing interest in transferring knowledge from one or multiple source tasks to the downstream target task to improve performance. However, current approaches typically either train adapters on individual tasks or distill shared knowledge from source tasks, failing to fully exploit task-specific knowledge and the correlation between source and target tasks. To overcome …
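The abstract refers to training adapters on top of a frozen pre-trained model. As a rough illustration of that general PEFT setup (not the paper's specific method), the sketch below adds a bottleneck adapter to a frozen Transformer layer and updates only the adapter's parameters; all names, dimensions, and the placeholder loss are illustrative assumptions.

```python
# Minimal sketch of adapter-based PEFT: freeze the backbone, train a small adapter.
# Class/variable names and dimensions are hypothetical, not from the paper.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual add."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

# Stand-in for one layer of a pre-trained backbone; its weights stay frozen.
backbone = nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)
for p in backbone.parameters():
    p.requires_grad = False

adapter = Adapter(hidden_dim=768)
optimizer = torch.optim.AdamW(adapter.parameters(), lr=1e-4)

x = torch.randn(2, 16, 768)      # (batch, seq_len, hidden)
out = adapter(backbone(x))       # adapter applied on top of the frozen layer
loss = out.pow(2).mean()         # placeholder loss, just to exercise the backward pass
loss.backward()
optimizer.step()
```

Only the adapter's few thousand parameters receive gradients, which is what makes this style of fine-tuning parameter-efficient; per-task adapters like this are the building block the abstract contrasts with distilling shared knowledge from source tasks.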

