April 16, 2024, 6:29 a.m. | /u/SeawaterFlows

Posted to r/MachineLearning (www.reddit.com)

**arXiv**: [https://arxiv.org/abs/2402.14811](https://arxiv.org/abs/2402.14811)

**OpenReview**: [https://openreview.net/forum?id=8sKcAWOf2D](https://openreview.net/forum?id=8sKcAWOf2D)

**Code**: [https://github.com/Nix07/finetuning](https://github.com/Nix07/finetuning)

**Models**: [https://huggingface.co/nikhil07prakash/float-7b](https://huggingface.co/nikhil07prakash/float-7b)

**Project page**: [https://finetuning.baulab.info/](https://finetuning.baulab.info/)

**Abstract**:

>Fine-tuning on generalized tasks such as instruction following, code generation, and mathematics has been shown to enhance language models' performance on a range of tasks. Nevertheless, explanations of how such fine-tuning influences the internal computations in these models remain elusive. We study how fine-tuning affects the internal mechanisms implemented in language models. As a case study, we explore the property of entity tracking, a crucial facet of language …
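Since the post links both the code repo and the fine-tuned checkpoint, here is a minimal sketch of what probing the entity-tracking property described in the abstract might look like. The box-and-object prompt format and the use of the plain `transformers` generation API are assumptions for illustration, not the paper's exact evaluation harness; only the model name (`nikhil07prakash/float-7b`) comes from the post.

```python
# Hypothetical entity-tracking probe: the model must recall which object
# was placed in which box earlier in the context.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "nikhil07prakash/float-7b"  # checkpoint linked in the post
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Illustrative prompt (assumed format): three objects, three boxes,
# then a query about one box.
prompt = (
    "The apple is in Box F, the letter is in Box Q, the ring is in Box X. "
    "Box Q contains the"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=3)

# Decode only the newly generated tokens.
completion = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(completion)  # a model that tracks entities should answer "letter"
```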
