A Relational Inductive Bias for Dimensional Abstraction in Neural Networks
Feb. 29, 2024, 5:42 a.m. | Declan Campbell, Jonathan D. Cohen
cs.LG updates on arXiv.org arxiv.org
Abstract: The human cognitive system exhibits remarkable flexibility and generalization capabilities, partly due to its ability to form low-dimensional, compositional representations of the environment. In contrast, standard neural network architectures often struggle with abstract reasoning tasks, are prone to overfitting, and require extensive training data. This paper investigates the impact of the relational bottleneck -- a mechanism that focuses processing on relations among inputs -- on the learning of factorized representations conducive to compositional coding and the attendant …
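The core idea of a relational bottleneck, as described in the abstract, is that downstream processing sees only the relations among inputs, never the raw feature values. A minimal sketch of one common instantiation -- pairwise cosine similarities between input embeddings -- is shown below; this is an illustration of the general concept, not the paper's specific architecture.

```python
import numpy as np

def relational_bottleneck(inputs: np.ndarray) -> np.ndarray:
    """Map a set of input embeddings to their pairwise relation matrix.

    Downstream layers receive only this matrix, never the raw feature
    values -- the defining constraint of a relational bottleneck.
    (Illustrative sketch; the paper's actual mechanism may differ.)
    """
    # Normalize each embedding so that relations are cosine similarities.
    norms = np.linalg.norm(inputs, axis=1, keepdims=True)
    normalized = inputs / np.clip(norms, 1e-8, None)
    # R[i, j] = cosine similarity between inputs i and j.
    return normalized @ normalized.T

# Two objects with identical features relate maximally (R[i, j] = 1),
# regardless of what those features are: relations abstract away the
# specific feature values, which is what makes them reusable.
x = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
R = relational_bottleneck(x)
print(np.round(R, 2))  # → [[1. 1. 0.], [1. 1. 0.], [0. 0. 1.]]
```

Because the relation matrix is invariant to how the features themselves are encoded, a model trained on top of it can generalize a learned rule (e.g. "same vs. different") to entirely new feature values.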