April 10, 2024, 4:43 a.m. | Aleksandar Petrov, Philip H. S. Torr, Adel Bibi

cs.LG updates on arXiv.org

arXiv:2310.19698v2 Announce Type: replace
Abstract: Context-based fine-tuning methods, including prompting, in-context learning, soft prompting (also known as prompt tuning), and prefix-tuning, have gained popularity due to their ability to often match the performance of full fine-tuning with a fraction of the parameters. Despite their empirical successes, there is little theoretical understanding of how these techniques influence the internal computation of the model and their expressiveness limitations. We show that despite the continuous embedding space being more expressive than the discrete …
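To make the contrast between these context-based methods and full fine-tuning concrete, below is a minimal PyTorch sketch of soft prompting (prompt tuning): a handful of learnable "virtual token" embeddings are prepended to the input embeddings while the base model stays frozen. This is an illustrative sketch under assumed names, not code from the paper; the SoftPrompt module and the frozen model referenced in the usage comments are hypothetical.

import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    # Prepend a small number of learnable "virtual token" embeddings to the
    # input embeddings. Only these embeddings are trained; the base model is frozen.
    def __init__(self, num_virtual_tokens: int, embed_dim: int):
        super().__init__()
        # The only trainable parameters: num_virtual_tokens x embed_dim.
        self.prompt = nn.Parameter(torch.randn(num_virtual_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds has shape (batch, seq_len, embed_dim).
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        # Concatenate along the sequence dimension: the frozen transformer then
        # attends over [virtual tokens ; real tokens].
        return torch.cat([prompt, input_embeds], dim=1)

# Usage sketch with a hypothetical frozen transformer `model`:
#   soft_prompt = SoftPrompt(num_virtual_tokens=20, embed_dim=768)
#   embeds = model.embed_tokens(input_ids)             # (B, T, d)
#   hidden = model.transformer(soft_prompt(embeds))    # only soft_prompt.prompt receives gradients

Prefix-tuning differs only in where the learned vectors enter: instead of prepending embeddings at the input layer, learned key/value prefixes are injected into the attention of every layer.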

