Human vs. LMMs: Exploring the Discrepancy in Emoji Interpretation and Usage in Digital Communication
April 16, 2024, 4:49 a.m. | Hanjia Lyu, Weihong Qi, Zhongyu Wei, Jiebo Luo
cs.CV updates on arXiv.org arxiv.org
Abstract: Leveraging Large Multimodal Models (LMMs) to simulate human behaviors when processing multimodal information, especially in the context of social media, has garnered immense interest due to its broad potential and far-reaching implications. Emojis, as one of the most distinctive aspects of digital communication, are pivotal in enriching and often clarifying its emotional and tonal dimensions. Yet, there is a notable gap in understanding how these advanced models, such as GPT-4V, interpret and employ emojis in …