May 30, 2023, 1 p.m. | Anthony Alford

InfoQ - AI, ML & Data Engineering www.infoq.com

Researchers from Microsoft, the University of Wisconsin–Madison, and Columbia University have open-sourced the Large Language and Vision Assistant (LLaVA). LLaVA combines a CLIP image encoder with a LLaMA language-model decoder, is fine-tuned on a synthetic instruction-following dataset, and achieves state-of-the-art accuracy on the ScienceQA benchmark.
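In architectures of this kind, the vision encoder's output features must be mapped into the language model's embedding space before the decoder can attend to them; the LLaVA paper uses a simple trainable linear projection for this. Below is a minimal PyTorch sketch of that pattern. All dimensions, module names, and the toy tensors are illustrative assumptions, not LLaVA's actual weights or API:

```python
import torch
import torch.nn as nn

class VisionLanguageBridge(nn.Module):
    """Hypothetical sketch: project CLIP-like patch features into an
    LM's embedding space and prepend them to the text embeddings."""

    def __init__(self, vision_dim=1024, lm_dim=4096):
        super().__init__()
        # Trainable projection from vision-feature space to LM embedding space
        self.projection = nn.Linear(vision_dim, lm_dim)

    def forward(self, image_features, text_embeddings):
        # image_features: (batch, num_patches, vision_dim) from a frozen image encoder
        # text_embeddings: (batch, seq_len, lm_dim) from the LM's embedding table
        visual_tokens = self.projection(image_features)
        # Prepend projected visual tokens so the decoder treats them
        # like an ordinary prefix of the prompt
        return torch.cat([visual_tokens, text_embeddings], dim=1)

# Toy usage with random tensors standing in for real encoder/LM outputs
bridge = VisionLanguageBridge()
img = torch.randn(1, 256, 1024)   # e.g. 256 image-patch features
txt = torch.randn(1, 32, 4096)    # e.g. 32 embedded prompt tokens
fused = bridge(img, txt)          # (1, 288, 4096) sequence fed to the decoder
print(fused.shape)
```

Keeping the bridge small means only the projection needs training in the first stage, while the pretrained encoder and decoder stay frozen; instruction tuning on the synthetic dataset then adapts the combined model to follow multimodal prompts.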

