April 18, 2023, 7:46 a.m. | /u/icybee666

Machine Learning www.reddit.com

Announcing [FastLoRAChat](https://github.com/bupticybee/FastLoRAChat): training a ChatGPT-style model without an A100.


Releasing the model: [https://huggingface.co/icybee/fast_lora_chat_v1_sunlight](https://huggingface.co/icybee/fast_lora_chat_v1_sunlight)

and the training data: [https://huggingface.co/datasets/icybee/share_gpt_90k_v1](https://huggingface.co/datasets/icybee/share_gpt_90k_v1)
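For anyone who wants to try the release, here is a minimal loading sketch using the `datasets`, `transformers`, and `peft` libraries. The base checkpoint name is an assumption (the post does not say which LLaMA variant the adapter was trained on), so treat it as a placeholder:

```python
from datasets import load_dataset
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Released training data (ShareGPT conversations, ~90k examples per the repo name)
data = load_dataset("icybee/share_gpt_90k_v1")

# Assumed base checkpoint; the post does not state which LLaMA variant was used.
base = "decapoda-research/llama-7b-hf"
tokenizer = LlamaTokenizer.from_pretrained(base)
model = LlamaForCausalLM.from_pretrained(base, load_in_8bit=True, device_map="auto")

# Attach the released LoRA adapter weights on top of the frozen base model
model = PeftModel.from_pretrained(model, "icybee/fast_lora_chat_v1_sunlight")
```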


The purpose of this project is to produce results similar to the FastChat model, but on much cheaper hardware (especially non-Ampere GPUs).

This repository combines features of [alpaca-lora](https://github.com/tloen/alpaca-lora) and [FastChat](https://github.com/lm-sys/FastChat):

1. Like FastChat, it supports multilingual, multi-round chat.
2. Like alpaca-lora, it supports training and inference on low-end graphics cards using LoRA (see the sketch after this list).
3. Everything is open source, including the dataset, the training code, the model-export code, and …
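As a rough illustration of point 2: LoRA freezes the base weights and trains only small low-rank adapter matrices, which is what makes consumer GPUs viable. Below is a minimal sketch with the `peft` library; the rank, target modules, and base checkpoint are illustrative assumptions in the alpaca-lora style, not this repo's exact settings:

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training
from transformers import LlamaForCausalLM

# Load the base model in 8-bit so it fits in consumer-GPU memory
# (the base checkpoint is an assumption, as above).
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    device_map="auto",
)
model = prepare_model_for_int8_training(model)

lora_config = LoraConfig(
    r=8,                                  # low-rank dimension of the adapters
    lora_alpha=16,                        # scaling factor for adapter updates
    target_modules=["q_proj", "v_proj"],  # attention projections, alpaca-lora style
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of parameters train
```

Because only the adapter matrices receive gradients, optimizer state and gradient memory shrink accordingly, which is what makes low-VRAM, non-Ampere cards workable.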
