Web: http://arxiv.org/abs/2201.12165

Jan. 31, 2022, 2:11 a.m. | Adam Małkowski, Jakub Grzechociński, Paweł Wawrzyński

cs.LG updates on arXiv.org arxiv.org

Invertible transformation of large graphs into constant-dimensional vectors
(embeddings) remains a challenge. In this paper we address it with two recursive
neural networks: an encoder and a decoder. The encoder network transforms
embeddings of subgraphs into embeddings of larger subgraphs, and eventually
into an embedding of the input graph. The decoder reverses this process. The
dimension of the embeddings is constant regardless of the size of the
(sub)graphs. Simulation experiments presented in this paper confirm that our
proposed graph autoencoder …
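The recursive idea in the abstract can be sketched as follows: an encoder repeatedly merges pairs of fixed-dimension subgraph embeddings into one embedding of the same dimension, and a decoder expands an embedding back into two child embeddings. This is only an illustrative toy, assuming a simple linear-plus-tanh merge; the names `W_enc`, `W_dec`, and the pairwise merging order are assumptions, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # constant embedding dimension, regardless of (sub)graph size

# Illustrative parameters (the paper's actual layers are not specified here).
W_enc = rng.standard_normal((D, 2 * D)) / np.sqrt(2 * D)  # merges two child embeddings
W_dec = rng.standard_normal((2 * D, D)) / np.sqrt(D)      # expands one embedding into two

def encode(embeddings):
    """Recursively merge subgraph embeddings pairwise until one D-dim vector remains."""
    while len(embeddings) > 1:
        merged = []
        for i in range(0, len(embeddings) - 1, 2):
            pair = np.concatenate([embeddings[i], embeddings[i + 1]])
            merged.append(np.tanh(W_enc @ pair))
        if len(embeddings) % 2:  # odd leftover passes through to the next level
            merged.append(embeddings[-1])
        embeddings = merged
    return embeddings[0]

def decode_step(z):
    """One decoder step: expand an embedding into two child embeddings."""
    out = np.tanh(W_dec @ z)
    return out[:D], out[D:]

# Toy usage: embeddings of 4 leaf subgraphs -> one graph embedding of dimension D.
leaves = [rng.standard_normal(D) for _ in range(4)]
g = encode(leaves)
a, b = decode_step(g)
```

Note that the dimension of every intermediate embedding stays `D`, which is the property the abstract emphasizes: the representation size does not grow with the size of the (sub)graph.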

