Aug. 30, 2023, 3:34 a.m. | /u/Comfortable_Dirt5590

Machine Learning

Hello r/MachineLearning, I'm one of the maintainers of an open-source library for calling all LLM APIs using the OpenAI format (Anthropic, Huggingface, Cohere, TogetherAI, Azure, OpenAI, etc.).

**I'm writing this post to share some of the strategies we use for running LLMs in production. We've served over 2M queries so far.**

TL;DR: Use caching + model fallbacks for reliability. This post goes into the details of our fallbacks implementation.

Using LLMs **reliably** in production involves the following components:

* **Caching** …
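To make the caching + fallbacks idea concrete, here is a minimal sketch of how the two fit together. This is not the library's actual API; `call_fn`, `call_with_fallbacks`, and the in-memory dict cache are all hypothetical stand-ins for illustration. The idea: check the cache first, then walk an ordered list of models, returning the first successful response and caching it.

```python
import hashlib

# Hypothetical sketch, not the library's real API.
# A real deployment would use a shared cache (e.g. Redis) with a TTL.
_cache = {}


def cache_key(prompt):
    """Key responses by a hash of the prompt."""
    return hashlib.sha256(prompt.encode()).hexdigest()


def call_with_fallbacks(prompt, models, call_fn):
    """Try each model in order; return the first successful response.

    `call_fn(model, prompt)` is a stand-in for a real completion call
    and is assumed to raise an exception on provider errors.
    """
    key = cache_key(prompt)
    if key in _cache:
        return _cache[key]  # cache hit: skip the API entirely

    errors = []
    for model in models:
        try:
            result = call_fn(model, prompt)
            _cache[key] = result  # cache the successful response
            return result
        except Exception as exc:
            errors.append((model, exc))  # record failure, try next model

    raise RuntimeError(f"All models failed: {errors}")
```

The ordering of `models` doubles as a priority list: put your primary provider first and cheaper or more available backups after it, so a provider outage degrades quality instead of availability.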

