Fine-tuning LLM on your laptop: VRAM vs Shared Memory vs GPU Load, Performance Considerations
DEV Community dev.to
I have been experimenting with supervised fine-tuning (SFT) and LoRA on my laptop with an NVIDIA RTX 4060 (8 GB). The subject of SFT is vast, picking the right training hyperparameters is more magic than science, and there's a good deal of experimentation...
Still, let me share one small finding: the effect of GPU utilization and shared memory on training speed.
I used the Stable LM 2 1.6B base model and turned it into a chat model using 4,400 samples from the OASTT2 dataset. Here is …
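A rough back-of-envelope calculation (my own sketch, not from the article) shows why LoRA makes a model of this size trainable on an 8 GB card: full fine-tuning with Adam must hold weights, gradients, and optimizer states for every parameter, while LoRA trains only small low-rank adapter matrices. The layer count and hidden size below are assumptions for a ~1.6B model, not published figures:

```python
def full_ft_memory_gb(n_params, bytes_weight=2, bytes_grad=2, bytes_opt=8):
    # fp16 weights + fp16 gradients + fp32 Adam moments (2 x 4 bytes per param)
    return n_params * (bytes_weight + bytes_grad + bytes_opt) / 1e9

def lora_trainable_params(n_layers, d_model, rank, n_target_matrices=2):
    # each adapted d_model x d_model weight gets two low-rank factors:
    # A (d_model x rank) and B (rank x d_model)
    return n_layers * n_target_matrices * 2 * d_model * rank

base = 1_600_000_000  # ~1.6B base parameters
print(f"full fine-tune (Adam, fp16): ~{full_ft_memory_gb(base):.0f} GB")

# assumed shapes: 24 layers, d_model=2048, LoRA rank 16 on two matrices per layer
adapters = lora_trainable_params(n_layers=24, d_model=2048, rank=16)
print(f"LoRA trainable params (r=16): ~{adapters / 1e6:.1f}M "
      f"({100 * adapters / base:.2f}% of base model)")
```

Under these assumptions, full fine-tuning would need roughly 19 GB just for parameters, gradients, and optimizer states (before activations), while the LoRA adapters add only a few million trainable parameters, which is why the 8 GB card can cope, and why spilling the remainder into shared memory becomes the performance question the article explores.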