Jan. 12, 2024, 7:56 p.m. | Scott Campit, Ph.D.

Towards Data Science (towardsdatascience.com)

TinyGPT-V is a “small” vision-language model that can run on a single GPU

Summary

AI technologies continue to become embedded in our everyday lives. One growing direction is multi-modal AI, such as integrating language models with vision models. These vision-language models can be applied to tasks such as video captioning, semantic search, and many other problems.
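To make the semantic-search application concrete, here is a minimal sketch of ranking items by embedding similarity. The toy vectors below are stand-ins for what a vision-language encoder would actually produce for images and text; the function names and data are illustrative, not part of TinyGPT-V.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def semantic_search(query_vec, corpus):
    """Rank (name, vector) pairs by similarity to the query vector."""
    ranked = sorted(
        corpus,
        key=lambda item: cosine_similarity(query_vec, item[1]),
        reverse=True,
    )
    return [name for name, _ in ranked]

# Toy embeddings standing in for VLM image-encoder outputs.
corpus = [
    ("dog_photo", [0.9, 0.1, 0.0]),
    ("cat_photo", [0.1, 0.9, 0.0]),
    ("car_photo", [0.0, 0.1, 0.9]),
]
query = [0.8, 0.2, 0.1]  # e.g., the embedding of the text "a dog"
print(semantic_search(query, corpus))  # → ['dog_photo', 'cat_photo', 'car_photo']
```

In a real system, the query and corpus vectors would come from the model's shared text and image embedding space, which is what lets a text query retrieve matching images.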

This week, I’m going to shine a spotlight on a recent vision-language model called TinyGPT-V (Arxiv | GitHub). What makes this …

