May 12, 2023, 12:42 p.m. | /u/JonDurbin

Machine Learning www.reddit.com

## airoboros-gpt-3.5-turbo-100k-7b

This is a 7b parameter model, fine-tuned on 100k synthetic instruction/response pairs generated by gpt-3.5-turbo using my version of self-instruct, [airoboros](https://github.com/jondurbin/airoboros).

Context length is 2048. The model is not great at math or step-by-step reasoning, and it inherits some quirks, biases, and other nuances from OpenAI (for example, gpt-3.5-turbo tends to generate a lot of content related to climate change & green energy).

The model can be found on [HuggingFace](https://huggingface.co/jondurbin/airoboros-gpt-3.5-turbo-100k-7b).
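A minimal inference sketch using Hugging Face `transformers` with the model ID from the link above. The prompt template in `build_prompt` is an assumption, not taken from this post; check the model card or the training data for the exact format used during fine-tuning.

```python
MODEL_ID = "jondurbin/airoboros-gpt-3.5-turbo-100k-7b"


def build_prompt(instruction: str) -> str:
    """Wrap a bare instruction in a simple instruction/response template.

    This template is hypothetical -- verify it against the actual
    fine-tuning data before relying on it.
    """
    return f"{instruction}\nResponse: "


def generate(instruction: str, max_new_tokens: int = 256) -> str:
    """Generate a response from the fine-tuned model (downloads ~13 GB)."""
    # Imported lazily so the prompt helper stays usable without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    # Keep prompt + generation within the model's 2048-token context window.
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```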

Links:

* [airoboros](https://github.com/jondurbin/airoboros)
* [instructions.jsonl](https://storage.googleapis.com/airoboros-dump/gpt-3.5-turbo-100k/instructions.jsonl)
* [topics.txt](https://storage.googleapis.com/airoboros-dump/gpt-3.5-turbo-100k/topics-d732f92dd90a1a5337a4a02ddeaec72b.txt)
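For anyone wanting to inspect the dumped training data, here is a small sketch for streaming instruction/response pairs out of `instructions.jsonl`. The field names `"instruction"` and `"response"` are assumptions about the dump's schema; inspect the first line of the actual file to confirm them.

```python
import json
from typing import Iterator, Tuple


def load_pairs(path: str) -> Iterator[Tuple[str, str]]:
    """Yield (instruction, response) pairs from a JSONL dump.

    The "instruction"/"response" keys are assumed, not confirmed by the
    post -- adjust after checking the real file.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # skip blank lines rather than crashing
                continue
            record = json.loads(line)
            yield record["instruction"], record["response"]
```

Streaming with a generator avoids loading all 100k records into memory at once.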


## Evaluation

I used …
