April 16, 2024, 6:29 a.m. | /u/SeawaterFlows

Posted to r/MachineLearning (www.reddit.com)

**arXiv**: [https://arxiv.org/abs/2402.14811](https://arxiv.org/abs/2402.14811)

**OpenReview**: [https://openreview.net/forum?id=8sKcAWOf2D](https://openreview.net/forum?id=8sKcAWOf2D)

**Code**: [https://github.com/Nix07/finetuning](https://github.com/Nix07/finetuning)

**Models**: [https://huggingface.co/nikhil07prakash/float-7b](https://huggingface.co/nikhil07prakash/float-7b)

**Project page**: [https://finetuning.baulab.info/](https://finetuning.baulab.info/)

**Abstract**:

>Fine-tuning on generalized tasks such as instruction following, code generation, and mathematics has been shown to enhance language models' performance on a range of tasks. Nevertheless, explanations of how such fine-tuning influences the internal computations in these models remain elusive. We study how fine-tuning affects the internal mechanisms implemented in language models. As a case study, we explore the property of entity tracking, a crucial facet of language …

