April 16, 2024, 4:49 a.m. | Hanjia Lyu, Weihong Qi, Zhongyu Wei, Jiebo Luo

cs.CV updates on arXiv.org

arXiv:2401.08212v2 Announce Type: replace
Abstract: Leveraging Large Multimodal Models (LMMs) to simulate human behaviors when processing multimodal information, especially in the context of social media, has garnered immense interest due to its broad potential and far-reaching implications. Emojis, as one of the most unique aspects of digital communication, are pivotal in enriching and often clarifying the emotional and tonal dimensions. Yet, there is a notable gap in understanding how these advanced models, such as GPT-4V, interpret and employ emojis in …

