Dec. 17, 2023, 7:30 a.m. | 1littlecoder


With the power of LLaVA models, and thanks to Ollama's support, you can run GPT-4 Vision-like (not an exact match) multimodal models locally on your computer (no GPU required).
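Beyond the `ollama run llava` CLI, a local LLaVA model can also be queried over Ollama's HTTP API. Below is a minimal sketch, assuming Ollama is already running on its default port 11434 and the `llava` model has been pulled; the image path and prompt are placeholders.

```python
import base64
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint


def build_payload(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    Images are sent as base64-encoded strings in the "images" list.
    """
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one complete JSON response instead of a stream
        "images": [base64.b64encode(image_bytes).decode("ascii")],
    }


def describe_image(path: str, prompt: str = "Describe this image.") -> str:
    """Send a local image to the llava model and return its text response."""
    with open(path, "rb") as f:
        payload = build_payload("llava", prompt, f.read())
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Usage would be something like `describe_image("photo.jpg")`; since everything runs against localhost, the image never leaves your machine.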

🔗 Links 🔗

My Ollama Intro tutorial - https://www.youtube.com/watch?v=C0GmAmyhVxM
Ollama Llava library - https://ollama.ai/library?q=llava
Ollama Multimodal release - https://github.com/jmorganca/ollama/releases/tag/v0.1.15
LLaVA https://llava-vl.github.io/

My previous Ollama Tutorial (Web UI)

https://www.youtube.com/watch?v=wxvFr4T7irs

❤️ If you want to support the channel ❤️
Support here:
Patreon - https://www.patreon.com/1littlecoder/
Ko-Fi - https://ko-fi.com/1littlecoder

🧭 Follow me on …

