Sept. 18, 2023, 1 p.m. | AI Coffee Break with Letitia


How does LoRA work? Low-Rank Adaptation for Parameter-Efficient LLM Finetuning explained.

📜 "LoRA: Low-Rank Adaptation of Large Language Models" Hu, E.J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L. and Chen, W., 2021. https://arxiv.org/abs/2106.09685
📚 https://sebastianraschka.com/blog/2023/llm-finetuning-lora.html
📽️ LoRA implementation: https://youtu.be/iYr1xZn26R8

Thanks to our Patrons who support us in Tier 2, 3, 4: 🙏
Dres. Trost GbR, Siltax, Vignesh Valliappan, Mutual Information, Kshitij

Outline:
00:00 LoRA explained
00:59 Why finetuning LLMs is costly
01:44 How LoRA works …
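The core idea covered in the video can be sketched in a few lines of NumPy: instead of updating a full pretrained weight matrix W (d x k), LoRA learns a low-rank update B @ A with rank r << min(d, k) and adds it to the frozen W. The sizes, rank, and alpha below are illustrative values, not from the video.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 512, 512, 8   # illustrative layer sizes; r is the LoRA rank
alpha = 16              # LoRA scaling hyperparameter (assumed value)

W = rng.standard_normal((d, k))         # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01  # trainable, small random init
B = np.zeros((d, r))                    # trainable, zero init so the update starts at 0

def lora_forward(x):
    # h = W x + (alpha / r) * B A x  -- only A and B are trained
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(k)
h = lora_forward(x)

# Why finetuning is cheaper: full finetuning updates d*k parameters,
# LoRA only r*(d+k).
full_params = d * k        # 262144
lora_params = r * (d + k)  # 8192 -> ~32x fewer trainable parameters
```

Because B starts at zero, the adapted model initially matches the pretrained one exactly, and the two small matrices can be merged back into W after training with no inference overhead.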
