Dec. 4, 2023, 7:34 p.m. | Andrej Baranovskij

Andrej Baranovskij www.youtube.com

The Ollama desktop tool helps run LLMs locally on your machine. This tutorial explains how I implemented a pipeline with LangChain and Ollama for on-premise invoice processing. Running an LLM on-premise provides many advantages in terms of security and privacy. Ollama works similarly to Docker; you can think of it as Docker for LLMs. You can pull and run multiple LLMs, which lets you switch between models without changing the RAG pipeline.
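To illustrate the idea, here is a minimal sketch of such a LangChain + Ollama RAG pipeline. It is not the exact implementation from the linked repo: the model names ("mistral", "nomic-embed-text"), the FAISS vector store, and the "invoice.pdf" path are assumptions for illustration only.

```python
# Minimal sketch of a local RAG pipeline over an invoice with LangChain + Ollama.
# Assumptions: the models have been pulled locally (e.g. `ollama pull mistral`),
# FAISS is used as the vector store, and "invoice.pdf" is a sample document path.
from langchain_community.llms import Ollama
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import FAISS
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains import RetrievalQA

# Load the invoice and split it into chunks that fit the embedding model.
docs = PyPDFLoader("invoice.pdf").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# Embed the chunks locally via Ollama and index them for retrieval.
vectorstore = FAISS.from_documents(chunks, OllamaEmbeddings(model="nomic-embed-text"))

# Point LangChain at a locally running Ollama model. Swapping LLMs is just a
# different model name here; the rest of the pipeline stays unchanged.
llm = Ollama(model="mistral")  # e.g. Ollama(model="llama2")

qa = RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())
print(qa.invoke({"query": "What is the total amount due on this invoice?"})["result"])
```

Running the sketch requires the Ollama daemon to be up and the referenced models pulled, plus the faiss-cpu and pypdf packages on the Python side; swapping in another model only means changing the string passed to Ollama, which is the "Docker for LLMs" point made above.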

GitHub repo:
https://github.com/katanaml/llm-ollama-invoice-cpu

0:00 Intro
0:22 Ollama and Why On-Premise …

