Sampling-based Distributed Training with Message Passing Neural Network
Feb. 26, 2024, 5:41 a.m. | Priyesh Kakka, Sheel Nidhan, Rishikesh Ranade, Jonathan F. MacArt
cs.LG updates on arXiv.org
Abstract: In this study, we introduce a domain-decomposition-based distributed training and inference approach for message-passing neural networks (MPNN). Our objective is to address the challenge of scaling edge-based graph neural networks as the number of nodes increases. Through our distributed training approach, coupled with Nyström-approximation sampling techniques, we present a scalable graph neural network, referred to as DS-MPNN (D and S standing for distributed and sampled, respectively), capable of scaling up to $O(10^5)$ nodes. We validate …
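The abstract gives no implementation details, but a minimal sketch can illustrate the core idea of sampled message passing: aggregating each node's update from a subsampled edge set rather than all edges. The PyTorch sketch below uses uniform edge subsampling as a simple stand-in; the layer name, sampling scheme, and parameters are hypothetical and are not the paper's DS-MPNN, which additionally couples Nyström-approximation sampling with domain-decomposed distributed training.

    # Hypothetical sketch of neighbor-sampled message passing in PyTorch.
    # Not the paper's DS-MPNN: uniform edge subsampling stands in for its
    # Nystrom-approximation sampling, and no domain decomposition is shown.
    import torch
    import torch.nn as nn

    class SampledMPNNLayer(nn.Module):
        """One message-passing layer that aggregates messages over a
        sampled subset of edges instead of the full edge set."""
        def __init__(self, dim, max_neighbors=8):
            super().__init__()
            self.max_neighbors = max_neighbors
            self.msg = nn.Linear(2 * dim, dim)  # message from (src, dst) features
            self.upd = nn.Linear(2 * dim, dim)  # node update from (h, aggregate)

        def forward(self, h, edge_index):
            src, dst = edge_index                    # shape: (2, num_edges)
            # Keep at most max_neighbors edges per node on average
            # (uniform subsampling; a stand-in for the paper's scheme).
            budget = self.max_neighbors * h.size(0)
            keep = torch.randperm(src.numel())[:budget]
            src, dst = src[keep], dst[keep]
            # Compute messages on sampled edges, then sum them per target node.
            m = torch.relu(self.msg(torch.cat([h[src], h[dst]], dim=-1)))
            agg = torch.zeros_like(h).index_add_(0, dst, m)
            return torch.relu(self.upd(torch.cat([h, agg], dim=-1)))

    # Tiny usage example on a random graph.
    h = torch.randn(100, 16)                      # 100 nodes, 16-dim features
    edge_index = torch.randint(0, 100, (2, 400))  # 400 random directed edges
    layer = SampledMPNNLayer(dim=16)
    print(layer(h, edge_index).shape)             # torch.Size([100, 16])

Because only the sampled edges enter the aggregation, per-step cost scales with the sampling budget rather than the full edge count, which is what makes this style of layer amenable to the large node counts the abstract targets.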