April 25, 2023, 12:11 p.m. | AI Coffee Break with Letitia

AI Coffee Break with Letitia www.youtube.com

We explain why large language models (LLM) suffer from multiple personality disorder and how they can morally self-correct with instructions. We elucidate technical terms such as RLHF (reinforcement learning from human feedback), explain "instruction following" and "Chain of Thought" prompting (CoT).
► Sponsor: Salad 👉 https://bit.ly/SaladCloud-Letitia

Check out our #MachineLearning Quiz Questions: https://www.youtube.com/c/AICoffeeBreak/community

📜 Ganguli, Deep, Amanda Askell, Nicholas Schiefer, Thomas Liao, Kamilė Lukošiūtė, Anna Chen, Anna Goldie et al. "The capacity for moral self-correction in large language models." arXiv …

explained feedback human human feedback information language language models large language models llm multiple paper personality prompting question answering reinforcement reinforcement learning rlhf support technical terms thought

Data Architect

@ University of Texas at Austin | Austin, TX

Data ETL Engineer

@ University of Texas at Austin | Austin, TX

Lead GNSS Data Scientist

@ Lurra Systems | Melbourne

Senior Machine Learning Engineer (MLOps)

@ Promaton | Remote, Europe

Risk Management - Machine Learning and Model Delivery Services, Product Associate - Senior Associate-

@ JPMorgan Chase & Co. | Wilmington, DE, United States

Senior ML Engineer (Speech/ASR)

@ ObserveAI | Bengaluru