March 19, 2024, 4:44 a.m. | Francesco Orsini, Daniele Baracchi, Paolo Frasconi

cs.LG updates on arXiv.org arxiv.org

arXiv:1703.05537v2 Announce Type: replace
Abstract: We introduce an architecture based on deep hierarchical decompositions to learn effective representations of large graphs. Our framework extends classic R-decompositions used in kernel methods, enabling nested part-of-part relations. Unlike recursive neural networks, which unroll a template on input graphs directly, we unroll a neural network template over the decomposition hierarchy, allowing us to deal with the high degree variability that typically characterizes social network graphs. Deep hierarchical decompositions are also amenable to domain compression, …
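To make the idea of unrolling a neural template over a decomposition hierarchy (rather than over the input graph itself) more concrete, here is a minimal, hedged sketch. It is not the authors' implementation; names such as `DecompositionNode` and `relu_mlp`, the sum aggregation, and the single-layer template are illustrative assumptions only.

```python
# Illustrative sketch (not the paper's code): each object in the
# part-of-part hierarchy gets a representation by aggregating the
# representations of its parts and passing the result through a small
# shared neural module, reused at every internal node.

import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # representation size at every level (arbitrary choice)

class DecompositionNode:
    """A node in the hierarchical decomposition.

    Leaves carry a feature vector (e.g. vertex attributes); internal
    nodes are objects composed of parts (their children).
    """
    def __init__(self, children=None, features=None):
        self.children = children or []
        self.features = features

def relu_mlp(x, W, b):
    """One dense layer with ReLU, standing in for the neural template."""
    return np.maximum(0.0, W @ x + b)

# One shared set of template parameters, unrolled over the hierarchy.
W = rng.standard_normal((DIM, DIM)) * 0.1
b = np.zeros(DIM)

def represent(node):
    """Bottom-up pass: aggregate part representations, then transform."""
    if not node.children:                       # leaf: use raw features
        return node.features
    aggregated = sum(represent(c) for c in node.children)   # aggregate parts
    return relu_mlp(aggregated, W, b)           # apply the shared template

# Tiny example: an object with two parts, each made of two leaves.
leaf = lambda: DecompositionNode(features=rng.standard_normal(DIM))
part1 = DecompositionNode(children=[leaf(), leaf()])
part2 = DecompositionNode(children=[leaf(), leaf()])
root = DecompositionNode(children=[part1, part2])

print(represent(root).shape)  # (8,) -- a fixed-size representation
```

Because the same template is applied at every level of the hierarchy instead of being unrolled on the raw graph, objects with very different numbers of parts (e.g. hubs versus low-degree vertices in social networks) are handled by the same fixed-size module.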
