May 11, 2023, 2:45 p.m. | The Full Stack

Full Stack Deep Learning www.youtube.com

In this video, Sergey Karayev covers the foundational ideas behind large language models: core machine learning, the Transformer architecture, notable LLMs, and the composition of pretraining datasets.

Download slides and view lecture notes: https://fullstackdeeplearning.com/llm-bootcamp/spring-2023/llm-foundations/

Intro and outro music made with Riffusion: https://github.com/riffusion/riffusion

Watch the rest of the LLM Bootcamp videos here: https://www.youtube.com/playlist?list=PL1T8fO7ArWleyIqOy37OVXsP4hFXymdOZ

00:00 Intro
00:47 Foundations of Machine Learning
12:11 The Transformer Architecture
12:57 Transformer Decoder Overview
14:27 Inputs
15:29 Input Embedding
16:51 Masked Multi-Head Attention
24:26 Positional Encoding
25:32 Skip Connections and Layer …
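
For readers following along with the chapters above, here is a minimal NumPy sketch of two of the decoder pieces the lecture names: sinusoidal positional encoding and causal (masked) self-attention. This is not code from the lecture; it is a single-head simplification of the multi-head version, and all function names, weight matrices, and shapes are illustrative assumptions.

import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Fixed sin/cos positional encodings in the style of 'Attention Is All You Need'."""
    positions = np.arange(seq_len)[:, None]        # (seq_len, 1)
    dims = np.arange(d_model)[None, :]             # (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates               # (seq_len, d_model)
    encoding = np.zeros((seq_len, d_model))
    encoding[:, 0::2] = np.sin(angles[:, 0::2])    # even dimensions get sine
    encoding[:, 1::2] = np.cos(angles[:, 1::2])    # odd dimensions get cosine
    return encoding

def masked_self_attention(x, w_q, w_k, w_v):
    """Single-head causal self-attention over a (seq_len, d_model) input."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                # (seq_len, seq_len)
    # Causal mask: position i may only attend to positions <= i.
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -1e9, scores)
    # Numerically stable row-wise softmax.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq_len, d_model = 8, 16
    # Toy "embeddings" plus positional encodings, as in the decoder input stage.
    x = rng.normal(size=(seq_len, d_model)) + sinusoidal_positional_encoding(seq_len, d_model)
    w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
    out = masked_self_attention(x, w_q, w_k, w_v)
    print(out.shape)  # (8, 16)

A real Transformer decoder splits q, k, and v across several heads, adds skip connections and layer normalization around each sublayer, and learns the projection matrices; this sketch keeps only the masking and scaled dot-product structure so the chapter topics are concrete.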
