May 19, 2023, 10:51 a.m. | MLOps.community


// Abstract
One of the biggest challenges of getting LLMs into production is their sheer size and computational cost. This talk explores how smaller, specialised models can be used in most cases to produce equally good results while being significantly cheaper and easier to deploy.
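To make the size gap concrete, here is a back-of-the-envelope comparison of the memory needed just to hold model weights, assuming fp16 storage (2 bytes per parameter). The parameter counts are public figures for a GPT-3-scale LLM (175B) and DistilBERT (66M), used purely as illustrative stand-ins; the talk itself does not name specific models.

```python
# Rough weight-memory comparison: general-purpose LLM vs. a small
# specialised model. Assumptions: fp16 weights (2 bytes/parameter);
# 175e9 and 66e6 are illustrative public parameter counts.

def weight_memory_gib(n_params: float, bytes_per_param: int = 2) -> float:
    """Memory required to hold the model weights alone, in GiB."""
    return n_params * bytes_per_param / 1024**3

llm_params = 175e9     # GPT-3-scale LLM
small_params = 66e6    # DistilBERT-scale specialised model

print(f"LLM weights:         {weight_memory_gib(llm_params):9.1f} GiB")
print(f"Small model weights: {weight_memory_gib(small_params):9.3f} GiB")
print(f"Parameter ratio:     {llm_params / small_params:9.0f}x")
```

Even before accounting for activations and KV caches at inference time, the large model needs hundreds of GiB across multiple accelerators, while the specialised model fits comfortably on a single commodity GPU or CPU.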

// Bio
Meryem is a co-founder of TitanML, which works to make it easier to produce cheaper, faster, and smaller NLP models that are simpler to deploy, through its deep learning compression and optimization …

