MMEarth: Exploring Multi-Modal Pretext Tasks For Geospatial Representation Learning
May 7, 2024, 4:43 a.m. | Vishal Nedungadi, Ankit Kariryaa, Stefan Oehmcke, Serge Belongie, Christian Igel, Nico Lang
cs.LG updates on arXiv.org
Abstract: The volume of unlabelled Earth observation (EO) data is huge, but many important applications lack labelled training data. However, EO data offers the unique opportunity to pair data from different modalities and sensors automatically based on geographic location and time, at virtually no human labor cost. We seize this opportunity to create a diverse multi-modal pretraining dataset at global scale. Using this new corpus of 1.2 million locations, we propose a Multi-Pretext Masked Autoencoder (MP-MAE) …
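The automatic pairing the abstract describes — matching observations from different sensors by geographic location and time — can be sketched as a join on quantized coordinates and a coarse time window. A minimal illustration (all record fields, patch identifiers, and the grid/window sizes below are hypothetical, not from the paper):

```python
from datetime import date

# Hypothetical metadata records from two sensors covering the same locations.
optical = [
    {"lat": 12.97, "lon": 77.59, "date": date(2020, 6, 1), "patch": "s2_0001"},
    {"lat": 48.14, "lon": 11.58, "date": date(2020, 6, 3), "patch": "s2_0002"},
]
radar = [
    {"lat": 12.97, "lon": 77.59, "date": date(2020, 6, 2), "patch": "s1_0007"},
    {"lat": 48.14, "lon": 11.58, "date": date(2020, 6, 3), "patch": "s1_0008"},
]

def pairing_key(rec, grid=0.01):
    # Quantize location to a grid cell and time to a calendar month so that
    # observations of the same place and period share a key.
    return (round(rec["lat"] / grid), round(rec["lon"] / grid),
            rec["date"].year, rec["date"].month)

# Index one modality by key, then join the other against it: no labels needed.
index = {pairing_key(r): r for r in radar}
pairs = [(o["patch"], index[pairing_key(o)]["patch"])
         for o in optical if pairing_key(o) in index]
print(pairs)
```

Because the key depends only on metadata that every sensor records, the join requires no human labeling, which is the opportunity the abstract highlights.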