Breaking Free Transformer Models: Task-specific Context Attribution Promises Improved Generalizability Without Fine-tuning Pre-trained LLMs. (arXiv:2401.16638v1 [cs.CL])
cs.CL updates on arXiv.org
Fine-tuning large pre-trained language models (LLMs) on particular datasets
is a commonly employed strategy in Natural Language Processing (NLP)
classification tasks. However, this approach usually results in a loss of
the model's generalizability. In this paper, we present a framework that
maintains generalizability and enhances performance on the downstream task
by utilizing task-specific context attribution. We show that a linear
transformation of the text representation from any transformer model using the
task-specific concept operator results in a projection …
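The abstract is truncated, so the exact form of the "task-specific concept operator" is not given here. A minimal sketch of the general idea, under the assumption that the operator can be approximated by a matrix of concept embeddings and that "context attribution" means linearly projecting a frozen transformer's text representation onto those concepts, might look as follows; the model name, pooling choice, and concept phrases are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only (assumptions, not the paper's method): approximate the
# task-specific concept operator with a matrix C of concept embeddings and project
# the text representation h onto it via a linear transformation h @ C.T.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # any pre-trained encoder; kept frozen, no fine-tuning
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME).eval()

def embed(texts):
    """Mean-pooled token embeddings from the frozen pre-trained encoder."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state       # (B, T, d)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)           # (B, d)

# Hypothetical task-specific concepts for a sentiment classification task.
concepts = ["positive sentiment", "negative sentiment"]
C = embed(concepts)                                       # (K, d) concept operator

# Linear transformation of the text representation by the concept operator:
# each text is projected into a K-dimensional task-specific context space.
texts = ["The movie was a delight.", "The plot was dull and predictable."]
H = embed(texts)                                          # (B, d)
scores = H @ C.T                                          # (B, K) context attributions
print(scores.argmax(dim=1))                               # predicted concept per text
```

Because the encoder stays frozen and only the projection onto task-specific concepts changes per task, a setup along these lines preserves the pre-trained model's general-purpose representations, which is the property the abstract claims to maintain.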