Aug. 16, 2023, 12:01 p.m. | /u/JacekPlocharczyk

Machine Learning www.reddit.com

Hey,

I wrote a simple FastAPI service to serve the Llama-2 7B chat model for our internal use (just to avoid relying on ChatGPT in our prototypes).

I thought it might be useful to others as well.

Feel free to play with it: [https://github.com/mowa-ai/llm-as-a-service](https://github.com/mowa-ai/llm-as-a-service)

Tested on an Nvidia L4 (24 GB) with a `g2-standard-8` VM on GCP.
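For anyone curious what serving the chat variant involves: Llama-2 chat models expect the prompt wrapped in `[INST]` / `<<SYS>>` tags before it is passed to the model. Below is a minimal stdlib sketch of that formatting for a single-turn request (the helper name and default system prompt are my own, not from the linked repo):

```python
# Llama-2 chat models are trained on prompts wrapped in [INST] / <<SYS>> tags;
# the service has to apply this template before calling model.generate().
# Helper name and default system prompt are illustrative, not from the repo.
def format_llama2_prompt(user_message: str,
                         system_prompt: str = "You are a helpful assistant.") -> str:
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_prompt}\n"
        "<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

if __name__ == "__main__":
    print(format_llama2_prompt("What GPU do I need for a 7B model?"))
```

Multi-turn history is handled by repeating the `[INST] ... [/INST] answer` pairs, but the single-turn case above covers most prototype use.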



Any feedback welcome :)

