Jan. 15, 2024, 1 p.m. | James Briggs

LLM function calling can be slow, particularly for AI agents. Using Semantic Router's dynamic routes, we can make this much faster and scale to thousands of tools and functions. Here we see how to use it with OpenAI's GPT-3.5 Turbo, but the library also supports Cohere and Llama.cpp for local deployments.

In Semantic Router there are two types of routes to choose from. Both belong to the Route object; the only difference is that static routes …
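The static/dynamic distinction can be sketched as follows. This is a toy illustration of the idea, not the semantic-router library's actual API: the bag-of-words "embedding" stands in for a real encoder (OpenAI, Cohere, or Llama.cpp), and the `function` callback stands in for LLM-driven argument extraction on a dynamic route.

```python
from collections import Counter
from math import sqrt


def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real router would call an encoder model.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class Route:
    """A route defined by example utterances.

    If `function` is None the route is static (matching it just returns
    the route name); otherwise it is dynamic (matching it triggers a call).
    """

    def __init__(self, name, utterances, function=None):
        self.name = name
        self.function = function
        self.vectors = [embed(u) for u in utterances]

    def score(self, query_vec: Counter) -> float:
        # Similarity to the closest example utterance.
        return max(cosine(query_vec, v) for v in self.vectors)


def route(query, routes, threshold=0.3):
    qv = embed(query)
    best = max(routes, key=lambda r: r.score(qv))
    if best.score(qv) < threshold:
        return None  # no route is a confident match
    if best.function is not None:
        # Dynamic route: a real implementation would have an LLM extract
        # the function arguments; here we pass the raw query through.
        return best.function(query)
    return best.name  # static route: just report which route matched
```

A quick usage example: the semantic match itself is a cheap vector comparison, so the (slow) LLM is only invoked when a dynamic route actually fires.

```python
chitchat = Route("chitchat", ["how are you", "what's up"])
get_time = Route("get_time", ["what time is it", "current time please"],
                 function=lambda q: "12:00")

route("what time is it", [chitchat, get_time])   # dynamic: calls the function
route("how are you today", [chitchat, get_time])  # static: returns "chitchat"
```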

