Podcast: Meryem Arik on LLM Deployment, State-of-the-art RAG Apps, and Inference Architecture Stack
InfoQ - AI, ML & Data Engineering www.infoq.com
In this podcast, Meryem Arik, Co-founder and CEO at TitanML, discusses innovations in Generative AI and Large Language Model (LLM) technologies, including the current state of large language models, LLM deployment, state-of-the-art Retrieval Augmented Generation (RAG) apps, and the inference architecture stack for LLM applications.
By Meryem Arik