Oct. 6, 2023, 11:15 a.m. | Prompt Engineering

Prompt Engineering www.youtube.com

In this video, I will show you a no-code method to run open-source LLMs locally. Using this simple approach, we will run Mistral-7B in Ollama and serve it via an API.
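For reference, here is a minimal sketch of what calling the locally served model might look like once Ollama is running. It assumes Ollama's default REST endpoint on port 11434 and that the Mistral model has already been pulled (e.g. via `ollama pull mistral`); check the Ollama docs linked below for the authoritative API reference.

```python
# Hypothetical sketch: query a locally running Ollama server (default port 11434)
# for a completion from the Mistral-7B model. Assumes `ollama pull mistral` was run.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",   # model name as registered in Ollama
        "prompt": "Explain what a large language model is in one sentence.",
        "stream": False,      # return a single JSON object instead of a token stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])  # the generated text
```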


CONNECT:
☕ Buy me a Coffee: https://ko-fi.com/promptengineering
🔴 Support my work on Patreon: Patreon.com/PromptEngineering
🦾 Discord: https://discord.com/invite/t4eYQRUcXB
📧 Business Contact: engineerprompt@gmail.com
💼Consulting: https://calendly.com/engineerprompt/consulting-call

LINKS:
Ollama: https://ollama.ai/
Github: https://github.com/jmorganca/ollama


Timestamps:
[00:00] Intro
[00:29] Ollama Setup
[02:22] Ollama - Options
[04:14] Ollama API

