Aug. 30, 2023, 3:34 a.m. | /u/Comfortable_Dirt5590


Hello r/MachineLearning, I'm one of the maintainers of [https://github.com/BerriAI/litellm/](https://github.com/BerriAI/litellm/) - an open-source library to call all LLM APIs using the OpenAI format (Anthropic, Hugging Face, Cohere, TogetherAI, Azure, OpenAI, etc.).
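
To make the idea concrete, here's a rough sketch of the call pattern: the same OpenAI-format `messages` list works across providers, and you switch providers by changing the model string. The model names below are just examples, not an exhaustive list.

```python
# Sketch: one input format, many providers (model names are examples).
import litellm

messages = [{"role": "user", "content": "Hey, how's it going?"}]

# OpenAI
litellm.completion(model="gpt-3.5-turbo", messages=messages)

# Anthropic -- same input format, same response shape
litellm.completion(model="claude-instant-1", messages=messages)
```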

**I'm writing this post to share some of the strategies we use to run LLMs in production; we've served over 2M queries so far.**

TLDR: Use caching + model fallbacks for reliability. This post goes into detail on our fallback implementation.

Using LLMs **reliably** in production involves the following components:

* **Caching** …
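
As a rough illustration of how caching and model fallbacks fit together, here's a minimal sketch built around `litellm.completion`. The in-memory cache, the fallback order, and the retry loop are illustrative assumptions for this post, not litellm's built-in implementation (litellm also ships its own caching and fallback helpers).

```python
# Minimal sketch: cache responses and fall back to another model on provider errors.
import hashlib
import json

import litellm

CACHE = {}  # in-memory cache; swap for Redis or similar in production
FALLBACK_MODELS = ["gpt-3.5-turbo", "claude-instant-1", "command-nightly"]  # example order

def cached_completion(messages):
    # Key the cache on the exact message payload.
    key = hashlib.sha256(json.dumps(messages, sort_keys=True).encode()).hexdigest()
    if key in CACHE:
        return CACHE[key]

    last_error = None
    for model in FALLBACK_MODELS:
        try:
            # Same OpenAI-format messages for every provider.
            response = litellm.completion(model=model, messages=messages)
            CACHE[key] = response
            return response
        except Exception as err:  # provider error: try the next model in the list
            last_error = err
    raise last_error

if __name__ == "__main__":
    print(cached_completion([{"role": "user", "content": "Hello, world"}]))
```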

