April 30, 2024, 4:46 a.m. | Jingxue Huang, Xilai Li, Tianshu Tan, Xiaosong Li, Tao Ye

cs.CV updates on arXiv.org arxiv.org

arXiv:2404.17747v1 Announce Type: new
Abstract: Multi-modal image fusion (MMIF) maps useful information from various modalities into the same representation space, thereby producing an informative fused image. However, existing fusion algorithms tend to fuse the multi-modal images symmetrically, causing loss of shallow information or a bias toward a single modality in certain regions of the fusion results. In this study, we analyzed the spatial distribution differences of information across modalities and proved that encoding features within the same network …
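The abstract's distinction between symmetric and asymmetric fusion can be illustrated with a minimal sketch. This is a hypothetical example, not the paper's method: symmetric fusion averages the two modalities with a fixed weight everywhere, while an asymmetric scheme weights each pixel by a local information measure (here, simply squared deviation from the image mean) so that the more informative modality dominates in each region.

```python
import numpy as np

def symmetric_fuse(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Symmetric fusion: a fixed 50/50 blend of both modalities.

    This can wash out detail that only one modality carries
    in a given region.
    """
    return 0.5 * a + 0.5 * b

def asymmetric_fuse(a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Asymmetric fusion sketch: per-pixel weights from a crude
    information proxy (squared deviation from the global mean).

    The modality with the stronger signal at a pixel receives
    the larger weight there, so the fusion is no longer symmetric
    across modalities.
    """
    wa = (a - a.mean()) ** 2
    wb = (b - b.mean()) ** 2
    w = wa / (wa + wb + eps)  # weight for modality a, in [0, 1]
    return w * a + (1.0 - w) * b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ir = rng.random((8, 8))    # stand-in for an infrared image
    vis = rng.random((8, 8))   # stand-in for a visible-light image
    fused = asymmetric_fuse(ir, vis)
    print(fused.shape)
```

The actual method in the paper builds on a learned encoder (the tags mention a UNet-style architecture); the hand-crafted weight above only conveys why a spatially varying, modality-dependent weighting avoids the single-modality bias the abstract describes.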

