March 19, 2024, 4:45 a.m. | Ye Hong, Yanan Xin, Simon Dirmeier, Fernando Perez-Cruz, Martin Raubal

cs.LG updates on arXiv.org

arXiv:2311.11749v2 Announce Type: replace-cross
Abstract: Deep neural networks are increasingly utilized in mobility prediction tasks, yet their intricate internal workings pose challenges for interpretability, especially in comprehending how various aspects of mobility behavior affect predictions. This study introduces a causal intervention framework to assess the impact of mobility-related factors on neural networks designed for next location prediction -- a task focusing on predicting the immediate next location of an individual. To achieve this, we employ individual mobility models to generate …
