March 5, 2024, 2:45 p.m. | Boyao Li, Alexandar J. Thomson, Matthew M. Engelhard, David Page

cs.LG updates on arXiv.org

arXiv:2305.17583v3 Announce Type: replace-cross
Abstract: Deep neural networks (DNNs) lack the precise semantics and definitive probabilistic interpretation of probabilistic graphical models (PGMs). In this paper, we propose an innovative solution by constructing infinite tree-structured PGMs that correspond exactly to neural networks. Our research reveals that DNNs, during forward propagation, indeed perform approximations of PGM inference that are precise in this alternative PGM structure. Not only does our research complement existing studies that describe neural networks as kernel machines or infinite-sized …
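The core claim, that forward propagation approximates inference in a corresponding tree-structured PGM, can be illustrated with a toy sketch. The snippet below is an assumption-laden NumPy illustration, not the paper's construction: it treats a sigmoid hidden layer as binary latent variables in a simple directed model and compares the deterministic forward pass with a Monte Carlo estimate of the output marginal. The two-layer shape and the weights W1, w2, b1, b2 are hypothetical.

```python
# Minimal sketch (illustration only, not the paper's exact construction):
# interpret a sigmoid hidden layer as binary latent variables in a small
# tree-structured directed PGM, then compare the deterministic forward pass
# against Monte Carlo inference over those latents.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical two-layer network: x -> h (binary latents) -> y (binary output).
d_in, d_hid = 4, 8
W1 = rng.normal(scale=0.5, size=(d_hid, d_in))
b1 = rng.normal(size=d_hid)
w2 = rng.normal(scale=0.5, size=d_hid)
b2 = 0.1

x = rng.normal(size=d_in)

# PGM view: P(h_j = 1 | x) = sigmoid(W1 x + b1)_j and P(y = 1 | h) = sigmoid(w2.h + b2).
p_h = sigmoid(W1 @ x + b1)

# (1) Standard forward pass: push the *means* of the latents through the next layer.
forward_pass = sigmoid(w2 @ p_h + b2)

# (2) Monte Carlo inference in the PGM: sample the binary latents, then average.
n_samples = 200_000
h_samples = (rng.random((n_samples, d_hid)) < p_h).astype(np.float64)
mc_marginal = sigmoid(h_samples @ w2 + b2).mean()  # estimate of P(y = 1 | x)

print(f"forward pass      : {forward_pass:.4f}")
print(f"Monte Carlo in PGM: {mc_marginal:.4f}")
# The two numbers are close: in this toy model the forward pass behaves like an
# approximation of inference over the latent variables.
```

The gap between the two quantities comes from pushing the expectation through the nonlinearity (Jensen's inequality); the abstract's stronger statement is that in the paper's infinite tree-structured PGM the correspondence is exact rather than approximate.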

