May 1, 2023, 12:15 a.m. | /u/NaxAlpha

r/MachineLearning (www.reddit.com)

Hi all,

I have been running a lot of experiments lately on extending the context length of transformers. I have documented some of them in a recent post here:

[https://naxalpha.substack.com/p/a-quest-for-very-long-context-part](https://naxalpha.substack.com/p/a-quest-for-very-long-context-part)

To sum it up, I was able to successfully fine-tune EleutherAI's Pythia-1.4B model with a context window of 8k tokens. The model reached the same loss as fine-tuning at a 2k-token context window within ~30 hours of fine-tuning on a single A100. …
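For readers who want a concrete starting point before reading the full post, here is a minimal sketch of what fine-tuning Pythia-1.4B at a longer context window can look like with Hugging Face `transformers`. This is not the recipe from the linked post: the 8192-token target, the `EleutherAI/pythia-1.4b` model id, the toy single-step training loop, and the assumption of a recent `transformers` release are all illustrative choices. Pythia uses rotary position embeddings, so there is no learned positional table to resize; the config override simply tells the library to expect longer sequences.

```python
# Minimal sketch (not the exact recipe from the linked post): load Pythia-1.4B,
# raise the positional limit from the stock 2048 to 8192, and run one
# causal-LM training step on a long sequence. Assumes a recent transformers
# release; the 8192 target and the toy optimizer/batch are illustrative.
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-1.4b"

# Override the positional limit; Pythia's rotary embeddings have no learned
# position table, so no weights need to be resized for longer contexts.
config = AutoConfig.from_pretrained(model_name)
config.max_position_embeddings = 8192

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, config=config)
model.gradient_checkpointing_enable()  # trades compute for memory on long sequences
model.train()

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# One toy step: tokenize a long document, cap it at 8k tokens, and use the
# input ids as labels for the standard causal language-modeling loss.
long_text = "your very long training document goes here ..."
batch = tokenizer(long_text, return_tensors="pt", truncation=True, max_length=8192)

out = model(input_ids=batch["input_ids"],
            attention_mask=batch["attention_mask"],
            labels=batch["input_ids"])
out.loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"loss: {out.loss.item():.3f}")
```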

Tags: context, context window, machine learning, Pythia, transformers
