Jan. 15, 2024, 5:46 a.m. | /u/APaperADay

Machine Learning | www.reddit.com

**Paper**: [https://arxiv.org/abs/2401.00849](https://arxiv.org/abs/2401.00849)

**Code**: [https://github.com/showlab/cosmo](https://github.com/showlab/cosmo)

**Models**: [https://huggingface.co/Awiny](https://huggingface.co/Awiny)

**Dataset**: [https://huggingface.co/datasets/Awiny/Howto-Interlink7M](https://huggingface.co/datasets/Awiny/Howto-Interlink7M)

**Project page**: [https://fingerrec.github.io/cosmo/](https://fingerrec.github.io/cosmo/)

**Abstract**:

>In the evolution of Vision-Language Pre-training, shifting from short-text comprehension to encompassing extended textual contexts is pivotal. Recent autoregressive vision-language models like [Flamingo, PaLM-E], leveraging the long-context capability of Large Language Models, have excelled in few-shot text generation tasks but face challenges in alignment tasks. Addressing this gap, we introduce the contrastive loss into text generation models, presenting the COntrastive-Streamlined MultimOdal framework (**CosMo**), strategically partitioning the language model …
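
For intuition, here is a minimal PyTorch sketch of the kind of combined objective the abstract describes: a standard autoregressive language-modeling loss plus a CLIP-style contrastive alignment loss between pooled image and text embeddings. The function name, tensor shapes, and the `lambda_con` weighting are illustrative assumptions, not the paper's actual implementation; see the linked repo for that.

```python
import torch
import torch.nn.functional as F

def combined_lm_contrastive_loss(lm_logits, target_ids, img_emb, txt_emb,
                                 temperature=0.07, lambda_con=1.0):
    """Illustrative CosMo-style objective (hypothetical helper, not the
    paper's API): autoregressive LM loss + symmetric contrastive loss.

    lm_logits:  (B, T, V) next-token logits from the language model
    target_ids: (B, T)    ground-truth token ids
    img_emb:    (B, D)    pooled image embeddings
    txt_emb:    (B, D)    pooled text embeddings
    """
    # Standard next-token prediction: shift logits/targets by one position.
    lm_loss = F.cross_entropy(
        lm_logits[:, :-1].reshape(-1, lm_logits.size(-1)),
        target_ids[:, 1:].reshape(-1),
    )

    # CLIP-style symmetric contrastive loss: matching (image, text) pairs
    # sit on the diagonal of the similarity matrix.
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature                  # (B, B)
    labels = torch.arange(img.size(0), device=img.device)
    con_loss = 0.5 * (F.cross_entropy(logits, labels)
                      + F.cross_entropy(logits.t(), labels))

    return lm_loss + lambda_con * con_loss
```

The `lambda_con` weight trades off generation quality against alignment; the "strategic partitioning" of the language model that the abstract mentions is more involved than this single-loss sketch.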
