May 15, 2023, 12:46 a.m. | Nandini Mundra, Sumanth Doddapaneni, Raj Dabre, Anoop Kunchukuttan, Ratish Puduppully, Mitesh M. Khapra

cs.CL updates on arXiv.org

Adapters have been positioned as a parameter-efficient fine-tuning (PEFT)
approach, whereby a minimal number of parameters are added to the model and
fine-tuned. However, adapters have not been sufficiently analyzed to understand
if PEFT translates to benefits in training/deployment efficiency and
maintainability/extensibility. Through extensive experiments on many adapters,
tasks, and languages in supervised and cross-lingual zero-shot settings, we
clearly show that for Natural Language Understanding (NLU) tasks, the parameter
efficiency in adapters does not translate to efficiency gains compared to …
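For readers unfamiliar with the adapter setup the abstract refers to, the sketch below shows a typical bottleneck adapter layer inserted into a frozen transformer: a down-projection, non-linearity, up-projection, and residual connection, with only the adapter weights trained. The hidden and bottleneck sizes, module name, and freezing helper are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Bottleneck adapter: down-project, activate, up-project, residual add.

    Only these few parameters are fine-tuned; the host transformer stays frozen.
    Sizes below are illustrative, not taken from the paper.
    """

    def __init__(self, hidden_size: int = 768, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection preserves the frozen model's representation.
        return x + self.up(self.act(self.down(x)))


def mark_adapter_params_trainable(model: nn.Module) -> None:
    # Hypothetical helper: freeze everything except parameters whose name
    # contains "adapter", mimicking the standard PEFT training recipe.
    for name, param in model.named_parameters():
        param.requires_grad = "adapter" in name
```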

