May 15, 2024, 4:43 a.m. | Alex O. Davies, Riku W. Green, Nirav S. Ajmeri, Telmo M. Silva Filho

cs.LG updates on arXiv.org

arXiv:2311.03976v3 Announce Type: replace
Abstract: The principal benefit of unsupervised representation learning is that a pre-trained model can be fine-tuned where data or labels are scarce. Existing approaches to graph representation learning are domain-specific, maintaining consistent node and edge features across the pre-training and target datasets, which has precluded transfer across domains.
In this work we present Topology Only Pre-Training, a graph pre-training method based on node and edge feature exclusion. Separating graph learning into two stages, topology …

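The abstract is truncated before the method details, but the core idea it states is pre-training on graph structure with node and edge features excluded. As a rough illustration only, not the authors' implementation, the sketch below shows what such feature exclusion could look like in PyTorch Geometric; the helper name strip_features and the constant placeholder features are assumptions for illustration.

import torch
from torch_geometric.data import Data

def strip_features(data: Data) -> Data:
    # Keep only the topology: replace node features with a constant
    # placeholder and drop edge features entirely, so a pre-trained
    # model can learn from structure alone (illustrative assumption).
    return Data(
        x=torch.ones(data.num_nodes, 1),  # uniform dummy node features
        edge_index=data.edge_index,       # connectivity is unchanged
    )

# Example: a 3-node path graph with rich features reduced to topology only.
g = Data(
    x=torch.randn(3, 16),
    edge_index=torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]]),
    edge_attr=torch.randn(4, 8),
)
print(strip_features(g))  # Data(x=[3, 1], edge_index=[2, 4])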