April 3, 2024, 4:43 a.m. | Archiki Prasad, Elias Stengel-Eskin, Mohit Bansal

cs.LG updates on arXiv.org

arXiv:2310.05861v2 Announce Type: replace-cross
Abstract: An increasing number of vision-language tasks can be handled with little to no training, i.e., in a zero- and few-shot manner, by marrying large language models (LLMs) to vision encoders, resulting in large vision-language models (LVLMs). While this has huge upsides, such as not requiring training data or custom architectures, how an input is presented to an LVLM can have a major impact on zero-shot model performance. In particular, inputs phrased in an underspecified way …
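A minimal sketch of the setting the abstract describes: querying an off-the-shelf LVLM (here BLIP-2 via Hugging Face transformers) zero-shot, and comparing an underspecified question against a more visually grounded rephrasing. The model checkpoint, image URL, and both question strings are illustrative assumptions, not the paper's actual setup or method.

```python
# Zero-shot VQA with an LVLM (LLM married to a vision encoder), showing how
# the phrasing of the input question can change the model's answer.
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Checkpoint choice is an assumption; any BLIP-2 variant works the same way.
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

# Illustrative image (a standard COCO validation photo).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# An underspecified question vs. a rephrasing grounded in visual detail.
questions = [
    "Question: What is it doing? Answer:",
    "Question: What is the cat lying on the couch doing? Answer:",
]

for q in questions:
    inputs = processor(images=image, text=q, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=20)
    answer = processor.batch_decode(out, skip_special_tokens=True)[0].strip()
    print(q, "->", answer)
```

No training or custom architecture is involved; only the prompt text differs between the two queries, which is the axis of variation the abstract says can have a major impact on zero-shot performance.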

