April 19, 2024, 4:43 a.m. | James Gornet, Matthew Thomson

cs.LG updates on arXiv.org arxiv.org

arXiv:2308.10913v2 Announce Type: replace-cross
Abstract: Humans construct internal cognitive maps of their environment directly from sensory inputs without access to a system of explicit coordinates or distance measurements. While machine learning algorithms like SLAM utilize specialized visual inference procedures to identify visual features and construct spatial maps from visual and odometry data, the general nature of cognitive maps in the brain suggests a unified mapping algorithmic strategy that can generalize to auditory, tactile, and linguistic inputs. Here, we demonstrate that …
