A Relational Inductive Bias for Dimensional Abstraction in Neural Networks
Feb. 29, 2024, 5:42 a.m. | Declan Campbell, Jonathan D. Cohen
cs.LG updates on arXiv.org
Abstract: The human cognitive system exhibits remarkable flexibility and generalization capabilities, partly due to its ability to form low-dimensional, compositional representations of the environment. In contrast, standard neural network architectures often struggle with abstract reasoning tasks, are prone to overfitting, and require extensive data for training. This paper investigates the impact of the relational bottleneck -- a mechanism that focuses processing on relations among inputs -- on the learning of factorized representations conducive to compositional coding and the attendant …
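The relational bottleneck described in the abstract restricts downstream processing to relations among inputs rather than the inputs' own features. A minimal sketch of that idea, assuming (hypothetically) that relations are computed as cosine similarities between input embeddings, might look like:

```python
import numpy as np

def relational_bottleneck(embeddings: np.ndarray) -> np.ndarray:
    """Pass forward only the pairwise relations among inputs.

    Hypothetical minimal sketch: the embeddings themselves are
    discarded, and downstream layers would see only the relation
    matrix R, where R[i, j] is the cosine similarity between
    inputs i and j.
    """
    # Normalize each embedding to unit length.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    # Inner products of unit vectors = cosine similarities.
    return z @ z.T

# Example: 4 inputs, each a 16-dimensional embedding.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))
R = relational_bottleneck(x)
print(R.shape)  # (4, 4) relation matrix
```

Because only the 4x4 relation matrix survives the bottleneck, the network cannot memorize input-specific features, which is the kind of pressure toward abstract, factorized representations the paper investigates.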