Nov. 7, 2023, 1 p.m. | code_your_own_AI

code_your_own_AI www.youtube.com

Parameter Efficient Fine-Tuning of LLM w/ multiple LoRA Adapters.

A deep dive into LoRA (Low-Rank Adaptation) and its possible configurations, including the 16 LoRA_config parameters for parameter-efficient fine-tuning.
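As a concrete starting point, here is a minimal sketch of a LoRA configuration using Hugging Face's peft library. The parameter names below exist in peft, but the exact set varies by version, and the specific values (rank, alpha, target modules) are illustrative assumptions, not the video's settings:

```python
# Minimal LoRA configuration sketch with Hugging Face peft.
# Values (r=8, lora_alpha=16, target_modules) are illustrative assumptions.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # example base model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,  # task the adapter is trained for
    r=8,                           # rank of the low-rank update matrices
    lora_alpha=16,                 # scaling factor applied to the update
    lora_dropout=0.05,             # dropout on the LoRA branch
    target_modules=["c_attn"],     # which weights get adapters (GPT-2 attention)
    bias="none",                   # leave bias terms frozen
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```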

Switch between different PEFT adapters, and activate or deactivate LoRA adapters added to a pre-trained LLM or VLM.
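A sketch of what this looks like with peft's adapter API; the adapter paths and names ("adapter_a", "adapter_b") are hypothetical placeholders:

```python
# Switching between multiple LoRA adapters on one base model with peft.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")

# Load the first adapter and give it a name.
model = PeftModel.from_pretrained(base, "path/to/adapter_a", adapter_name="adapter_a")
# Load a second adapter onto the same base model.
model.load_adapter("path/to/adapter_b", adapter_name="adapter_b")

# Activate one adapter at a time.
model.set_adapter("adapter_a")
# ... run inference with adapter_a active ...
model.set_adapter("adapter_b")

# Temporarily deactivate all adapters to recover the raw pre-trained model.
with model.disable_adapter():
    pass  # ... run inference with the base weights only ...
```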

PEFT - LoRA explained, in detail.
Matrix factorization and singular value decomposition (SVD).
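To make the low-rank idea concrete: LoRA replaces a full d x k weight update with a product of two thin matrices, B (d x r) and A (r x k). A short NumPy sketch of this via truncated SVD (the dimensions and rank are arbitrary example values, not from the video):

```python
# The low-rank idea behind LoRA, illustrated with truncated SVD in NumPy.
import numpy as np

d, k, r = 512, 512, 8
delta_W = np.random.randn(d, k)  # stand-in for a full weight update

U, S, Vt = np.linalg.svd(delta_W, full_matrices=False)
B = U[:, :r] * S[:r]             # d x r, singular values folded in
A = Vt[:r, :]                    # r x k
approx = B @ A                   # best rank-r approximation (Eckart-Young)

full_params = d * k
lora_params = d * r + r * k
print(f"full: {full_params}, low-rank: {lora_params}")  # 262144 vs 8192
```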

How to combine multiple PEFT LoRA adapters into a single adapter.
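Continuing the adapter-switching sketch above, peft exposes add_weighted_adapter for this; the adapter names, mixing weights, and combination type below are illustrative assumptions, and supported combination types depend on your peft version:

```python
# Combine two previously loaded LoRA adapters into one named adapter.
model.add_weighted_adapter(
    adapters=["adapter_a", "adapter_b"],  # previously loaded adapter names
    weights=[0.5, 0.5],                   # linear mixing weights
    adapter_name="merged",
    combination_type="linear",
)
model.set_adapter("merged")  # activate the combined adapter
```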

#ai
#research
#codegeneration

