March 3, 2024, 6:45 p.m. | Andrej Baranovskij

I describe how to run a local LLaVA multimodal LLM through Ollama. The advantage of this approach is that you can feed image documents to the LLM directly, without first running them through OCR, which should lead to better results. This functionality is integrated into Sparrow as a separate LLM agent.
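The image-to-LLM flow described above can be sketched against Ollama's REST API (a minimal illustration, not Sparrow's actual agent code; the model name `llava` and the default local endpoint are assumptions about your setup):

```python
import base64
import json

# Default Ollama endpoint; adjust if your server runs elsewhere.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_llava_request(prompt: str, image_bytes: bytes) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    The image is sent base64-encoded alongside the prompt, so the
    model reads the document directly -- no OCR step.
    """
    return {
        "model": "llava",  # local model, pulled beforehand with `ollama pull llava`
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }

# Example payload for an invoice image (dummy bytes stand in for a real file):
payload = build_llava_request("Extract the invoice number and total.", b"\x89PNG...")
print(json.dumps(payload)[:80])

# To actually send it (requires a running Ollama server):
#   import urllib.request
#   req = urllib.request.Request(OLLAMA_URL, json.dumps(payload).encode(),
#                                {"Content-Type": "application/json"})
#   print(json.loads(urllib.request.urlopen(req).read())["response"])
```

The HTTP call itself is left commented out since it needs a live Ollama server; the payload shape is the part the video's approach hinges on.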

Sparrow GitHub repo:
https://github.com/katanaml/sparrow

0:00 Intro
0:49 Example
3:24 Code
5:50 Summary

CONNECT:
- Subscribe to this YouTube channel
- Twitter: https://twitter.com/andrejusb
- LinkedIn: https://www.linkedin.com/in/andrej-baranovskij/
- Medium: https://medium.com/@andrejusb

#rag #llm #llamaindex

