Jan. 12, 2024, 7:56 p.m. | Scott Campit, Ph.D.

Towards Data Science - Medium towardsdatascience.com

TinyGPT-V is a “small” vision-language model that can run on a single GPU

Summary

AI technologies continue to become embedded in our everyday lives. One growing direction is multi-modal AI, such as models that integrate language with vision. These vision-language models can be applied to tasks such as video captioning, semantic search, and many other problems.

This week, I’m going to shine a spotlight on a recent vision-language model called TinyGPT-V (arXiv | GitHub). What makes this …

