On Evaluation Metrics for Graph Generative Models. (arXiv:2201.09871v2 [cs.LG] UPDATED)
April 29, 2022, 1:12 a.m. | Rylee Thompson, Boris Knyazev, Elahe Ghalebi, Jungtaek Kim, Graham W. Taylor
cs.LG updates on arXiv.org arxiv.org
In image generation, generative models can be evaluated naturally by visually
inspecting model outputs. However, this is not always the case for graph
generative models (GGMs), making their evaluation challenging. Currently, the
standard process for evaluating GGMs suffers from three critical limitations:
i) it does not produce a single score, which makes model selection challenging,
ii) in many cases it fails to consider underlying edge and node features, and
iii) it is prohibitively slow to perform. In this work, we …
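The "standard process" the abstract critiques typically compares distributions of graph statistics (e.g. degree histograms) between reference and generated graphs using a kernel MMD, producing one score per statistic rather than a single overall score. A minimal sketch of that baseline evaluation, assuming a Gaussian kernel over degree histograms (function names are illustrative, not the paper's API):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_graph(n, p, rng):
    """Symmetric 0/1 adjacency matrix with edge probability p (toy data)."""
    a = rng.random((n, n)) < p
    a = np.triu(a, 1)
    return (a | a.T).astype(int)

def degree_histogram(adj, max_degree=10):
    """Normalized degree histogram, clipping degrees above max_degree."""
    deg = adj.sum(axis=1)
    counts = np.bincount(np.minimum(deg, max_degree), minlength=max_degree + 1)
    return counts / counts.sum()

def gaussian_mmd(xs, ys, sigma=1.0):
    """Biased MMD^2 estimate between two sets of feature vectors."""
    def k(a, b):
        return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))
    kxx = np.mean([k(a, b) for a in xs for b in xs])
    kyy = np.mean([k(a, b) for a in ys for b in ys])
    kxy = np.mean([k(a, b) for a in xs for b in ys])
    return kxx + kyy - 2 * kxy

# Reference graphs vs. a "model" that gets the edge density wrong.
ref = [degree_histogram(random_graph(20, 0.2, rng)) for _ in range(5)]
sim = [degree_histogram(random_graph(20, 0.2, rng)) for _ in range(5)]
bad = [degree_histogram(random_graph(20, 0.6, rng)) for _ in range(5)]

mmd_similar = gaussian_mmd(ref, sim)
mmd_different = gaussian_mmd(ref, bad)
```

Note how this yields a separate MMD per chosen statistic (degree, clustering, orbit counts, ...), ignores node/edge features entirely, and scales quadratically in the number of graphs — the three limitations the abstract enumerates.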