Feb. 8, 2024, 5:46 a.m. | Tzu-Han Lin, How-Shing Wang, Hao-Yung Weng, Kuang-Chen Peng, Zih-Ching Chen, Hung-yi Lee

cs.CL updates on arXiv.org

Parameter-Efficient Fine-Tuning (PEFT) is increasingly recognized as an effective method in speech processing. However, the optimal choice of PEFT method and its layer-wise placement remain unresolved. Our study conducts extensive experiments comparing different PEFT methods and their layer-wise placement, adapting Differentiable Architecture Search (DARTS). We also explore ensemble learning to leverage diverse PEFT strategies. The results reveal that DARTS does not outperform the baseline approach, which inserts the same PEFT method into all layers of a …
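To make the search concrete, here is a minimal PyTorch-style sketch (not the authors' code) of a DARTS mixed operation over PEFT candidates at a single layer: each layer holds a softmax over learnable architecture weights that blends candidate modules, and standard DARTS alternates updates of those weights (on held-out batches) with updates of the adapter parameters (on training batches). All module names (`LoRAAdapter`, `BottleneckAdapter`, `MixedPEFT`) and hyperparameters below are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of a DARTS-style search over
# PEFT candidates at one layer. Names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRAAdapter(nn.Module):
    """LoRA-style low-rank residual: x + up(down(x))."""
    def __init__(self, dim, rank=8):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)  # start as an identity mapping

    def forward(self, x):
        return x + self.up(self.down(x))

class BottleneckAdapter(nn.Module):
    """Houlsby-style bottleneck adapter with a residual connection."""
    def __init__(self, dim, bottleneck=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, bottleneck), nn.ReLU(), nn.Linear(bottleneck, dim))

    def forward(self, x):
        return x + self.net(x)

class MixedPEFT(nn.Module):
    """DARTS mixed operation: a softmax-weighted sum of PEFT candidates."""
    def __init__(self, dim):
        super().__init__()
        self.candidates = nn.ModuleList([
            nn.Identity(),        # "no PEFT at this layer"
            LoRAAdapter(dim),
            BottleneckAdapter(dim),
        ])
        # One architecture weight per candidate, optimized on held-out data.
        self.alpha = nn.Parameter(torch.zeros(len(self.candidates)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.candidates))

# Usage: attach one MixedPEFT after each frozen backbone layer; after the
# search, keep the argmax candidate per layer. The baseline described in
# the abstract corresponds to fixing the same candidate at every layer.
pefts = nn.ModuleList([MixedPEFT(768) for _ in range(12)])
x = torch.randn(4, 100, 768)   # (batch, frames, features)
for peft in pefts:
    x = peft(x)                # frozen backbone layer call omitted
```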

Tags: cs.CL, cs.SD, eess.AS, PEFT, fine-tuning, speech processing, layer-wise placement, differentiable architecture search, ensemble, merging
