May 22, 2022, 8:22 a.m. | /u/LanverYT

Machine Learning www.reddit.com

I am curious to understand the reasoning behind the decision to use Skip-gram rather than CBOW for these two models. According to the original Word2vec paper, CBOW is faster to train and better captures syntactic similarities, whereas Skip-gram is slower to train but captures more robust semantic similarities and handles infrequent words better. How does this trade-off carry over to random walks on graphs, and what motivated this design choice?


DeepWalk: [https://dl.acm.org/doi/abs/10.1145/2623330.2623732](https://dl.acm.org/doi/abs/10.1145/2623330.2623732)

Node2vec: [https://dl.acm.org/doi/abs/10.1145/2939672.2939754](https://dl.acm.org/doi/abs/10.1145/2939672.2939754)
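For anyone unfamiliar with how the two papers reduce graphs to a language-modeling problem: they sample random walks, treat each walk as a "sentence" of node IDs, and feed those to Word2vec. Below is a minimal sketch of that pipeline, assuming gensim 4.x and networkx (neither is prescribed by the papers; the walk length, walk count, and vector size here are arbitrary illustration values). The `sg=1` flag selects Skip-gram, which is what DeepWalk and node2vec use; `sg=0` would select CBOW.

```python
import random

import networkx as nx
from gensim.models import Word2Vec

# Toy graph standing in for a real network.
G = nx.karate_club_graph()

def random_walk(graph, start, length):
    """Uniform random walk, as in DeepWalk.

    node2vec replaces the uniform transition with one biased by its
    return parameter p and in-out parameter q.
    """
    walk = [start]
    for _ in range(length - 1):
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    # Stringify node IDs so Word2Vec treats each node as a "word".
    return [str(node) for node in walk]

# Several walks per node form the "corpus"; each walk is a "sentence".
walks = [random_walk(G, node, length=10)
         for node in G.nodes() for _ in range(5)]

# sg=1 -> Skip-gram (predict context nodes from the center node);
# sg=0 -> CBOW (predict the center node from its averaged context).
model = Word2Vec(walks, vector_size=64, window=5, sg=1,
                 min_count=1, workers=1)

# Nodes whose walk contexts resemble node 0's.
print(model.wv.most_similar("0"))
```

One intuition for why Skip-gram is the natural fit here: in a walk corpus, every node occurrence generates a separate training pair per context node, so rarely visited nodes (the graph analogue of infrequent words) still get many gradient updates, whereas CBOW averages the context into a single prediction per center node.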

Tags: deepwalk, machinelearning, node2vec
