Sept. 27, 2022, 1:12 a.m. | Xiang Zhang, Huiyuan Yang, Taoyue Wang, Xiaotian Li, Lijun Yin

cs.CV updates on arXiv.org arxiv.org

Recent studies have utilized multi-modal data to build a robust model for facial Action Unit (AU) detection. However, due to the heterogeneity of multi-modal data, multi-modal representation learning remains one of the main challenges. On one hand, it is difficult to extract relevant features from multiple modalities with only one feature extractor; on the other hand, previous studies have not fully explored the potential of multi-modal fusion strategies. For example, early fusion usually requires all modalities to be present during …
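The constraint mentioned for early fusion can be illustrated with a minimal sketch (not the paper's method; the function and feature shapes are hypothetical): concatenating per-modality feature vectors into one joint representation means every modality must be available at inference time.

```python
import numpy as np

def early_fuse(modalities):
    # Early fusion: concatenate per-modality feature vectors into a
    # single representation before the joint model sees them. The fused
    # vector has a fixed size, so a missing modality breaks the input.
    if any(m is None for m in modalities):
        raise ValueError("early fusion requires all modalities to be present")
    return np.concatenate(modalities)

# Hypothetical feature dimensions for illustration only.
visual = np.random.rand(128)   # e.g. visual AU features
thermal = np.random.rand(64)   # e.g. a second modality

fused = early_fuse([visual, thermal])
print(fused.shape)  # (192,)
```

Late or intermediate fusion strategies relax this constraint by combining modality-specific predictions or representations, which is one motivation for exploring alternative fusion schemes.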

arxiv autoencoder detection masked autoencoder multimodal multimodal learning
