Human vs. LMMs: Exploring the Discrepancy in Emoji Interpretation and Usage in Digital Communication
April 16, 2024, 4:49 a.m. | Hanjia Lyu, Weihong Qi, Zhongyu Wei, Jiebo Luo
cs.CV updates on arXiv.org
Abstract: Leveraging Large Multimodal Models (LMMs) to simulate human behaviors when processing multimodal information, especially in the context of social media, has garnered immense interest due to its broad potential and far-reaching implications. Emojis, one of the most distinctive aspects of digital communication, are pivotal in enriching and often clarifying emotional and tonal dimensions. Yet, there is a notable gap in understanding how these advanced models, such as GPT-4V, interpret and employ emojis in …