July 23, 2023, 7:18 p.m. | /u/plain1994

Machine Learning www.reddit.com

Run Llama 2 locally with a Gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Supports Llama-2-7B/13B/70B with 8-bit and 4-bit quantization, GPU inference (6 GB VRAM), and CPU inference. ➡️[https://github.com/liltom-eth/llama2-webui](https://github.com/liltom-eth/llama2-webui)
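A bit of back-of-the-envelope arithmetic shows why 4-bit quantization is what makes the 6 GB VRAM figure plausible for the 7B model: weights alone drop from ~14 GB at fp16 to ~3.5 GB at 4 bits per parameter. This is an illustrative sketch only (real memory use adds activations, the KV cache, and framework overhead), not the project's own sizing logic:

```python
# Rough weight-memory estimates for Llama 2 at different quantization
# levels. Illustrative arithmetic only; actual inference needs extra
# memory for activations, KV cache, and framework overhead.

PARAMS = {"7B": 7e9, "13B": 13e9, "70B": 70e9}
BYTES_PER_PARAM = {"fp16": 2.0, "8-bit": 1.0, "4-bit": 0.5}

def weight_gb(model: str, quant: str) -> float:
    """Approximate size of the model weights in GB."""
    return PARAMS[model] * BYTES_PER_PARAM[quant] / 1e9

for model in PARAMS:
    row = ", ".join(
        f"{q}: {weight_gb(model, q):.1f} GB" for q in BYTES_PER_PARAM
    )
    print(f"Llama-2-{model} -> {row}")
```

By this estimate a 4-bit 7B model (~3.5 GB of weights) leaves headroom on a 6 GB card, while even 4-bit 70B (~35 GB) stays out of reach for consumer GPUs, which is where CPU inference comes in.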

Successfully running #Llama2 on my Apple Silicon MacBook Air:

[demo](https://twitter.com/liltom_eth/status/1682791729207070720?s=20)

