April 19, 2024, 4:43 a.m. | James Gornet, Matthew Thomson

cs.LG updates on arXiv.org arxiv.org

arXiv:2308.10913v2 Announce Type: replace-cross
Abstract: Humans construct internal cognitive maps of their environment directly from sensory inputs without access to a system of explicit coordinates or distance measurements. While machine learning algorithms like SLAM utilize specialized visual inference procedures to identify visual features and construct spatial maps from visual and odometry data, the general nature of cognitive maps in the brain suggests a unified mapping algorithmic strategy that can generalize to auditory, tactile, and linguistic inputs. Here, we demonstrate that …

