Soft Learning Probabilistic Circuits
March 22, 2024, 4:42 a.m. | Soroush Ghandi, Benjamin Quost, Cassio de Campos
cs.LG updates on arXiv.org
Abstract: Probabilistic Circuits (PCs) are prominent tractable probabilistic models, allowing for a range of exact inferences. This paper focuses on the main algorithm for training PCs, LearnSPN, a gold standard due to its efficiency, performance, and ease of use, in particular for tabular data. We show that LearnSPN is a greedy likelihood maximizer under mild assumptions. While inferences in PCs may use the entire circuit structure for processing queries, LearnSPN applies a hard method for learning …