Aug. 19, 2022, 1:11 a.m. | Manuel Brack, Patrick Schramowski, Björn Deiseroth, Kristian Kersting

cs.LG updates on arXiv.org

Bootstrapping from pre-trained language models has proven to be an
efficient approach for building foundation vision-language models (VLMs) for
tasks such as image captioning or visual question answering. However, it is
difficult, if not impossible, to use this approach to make the model conform
to a user's rationales for specific answers. To elicit and reinforce
commonsense reasons, we propose an iterative sampling and tuning paradigm,
called ILLUME, that executes the following loop: Given an
image-question-answer prompt, the VLM samples multiple candidate rationales,
and …
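For a concrete picture of the loop the abstract describes, here is a minimal sketch of an iterative sampling-and-tuning procedure in that spirit. The abstract is truncated before it explains how candidate rationales are selected, so the selection step, and every name below (ToyVLM, score_rationale, fine_tune), is a hypothetical placeholder, not the authors' implementation.

```python
"""Sketch of an iterative sampling-and-tuning loop, assuming a pluggable
VLM, a rationale-fitness check, and a fine-tuning step. All components
here are toy stand-ins for illustration only."""

import random
from dataclasses import dataclass


@dataclass
class Example:
    image: str      # image path or identifier
    question: str
    answer: str


class ToyVLM:
    """Placeholder model; real work would use a pretrained VLM."""

    def generate(self, prompt: str) -> str:
        # Return a pseudo-random "rationale" string for the prompt.
        return f"rationale-{random.randint(0, 999)}"


def score_rationale(example: Example, rationale: str) -> bool:
    """Stand-in fitness check. The abstract is cut off before it says
    how candidates are judged, so this criterion is an assumption."""
    return random.random() > 0.5


def fine_tune(model: ToyVLM, data: list) -> ToyVLM:
    """Stand-in for fine-tuning the model on accepted rationales."""
    return model


def sampling_and_tuning_loop(model, dataset, rounds=3, samples_per_prompt=8):
    accepted = []
    for _ in range(rounds):
        for ex in dataset:
            prompt = (f"Image: {ex.image}\nQ: {ex.question}\n"
                      f"A: {ex.answer}\nBecause:")
            # Sample multiple candidate rationales for the same prompt.
            candidates = [model.generate(prompt)
                          for _ in range(samples_per_prompt)]
            # Keep only rationales judged to fit the given answer.
            accepted.extend((ex, r) for r in candidates
                            if score_rationale(ex, r))
        # Tune on the accepted rationales before sampling again.
        model = fine_tune(model, accepted)
    return model


if __name__ == "__main__":
    data = [Example("cat.jpg", "What animal is shown?", "a cat")]
    tuned = sampling_and_tuning_loop(ToyVLM(), data)
```

The key design point the abstract conveys is the feedback loop itself: each round of sampling produces candidate rationales, a selection step filters them, and the surviving rationales become tuning data that shapes the next round of sampling.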

Tags: arxiv, language, language models, lg, vision
