July 19, 2023, 8:56 p.m. | /u/Captain_Flashheart

Natural Language Processing www.reddit.com

A data scientist on our team is curious about what would happen if we used subword tokenization (BERT's WordPiece tokenization) as the tokenization step for our conventional models (word2vec, CNNs, LSTMs). The word2vec model is used for recommendation and clustering, in addition to serving "just" as the embedding layer of other models. We said we'd try it out.
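A minimal sketch of what that experiment could look like, assuming HuggingFace transformers for the WordPiece tokenizer and gensim for word2vec; the toy corpus and hyperparameters here are placeholders, not the team's actual setup:

```python
# Sketch: train word2vec on BERT WordPiece tokens instead of whole words.
# Assumes `transformers` and `gensim` are installed; corpus is illustrative.
from transformers import BertTokenizerFast
from gensim.models import Word2Vec

# Hypothetical toy corpus standing in for the real training data.
corpus = [
    "subword tokenization splits rare words into pieces",
    "word2vec learns one vector per token in the vocabulary",
]

# Tokenize each sentence with BERT's WordPiece tokenizer rather than
# whitespace/word-level tokenization.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
tokenized = [tokenizer.tokenize(sentence) for sentence in corpus]
# e.g. "tokenization" -> ["token", "##ization"]

# Train word2vec on the subword sequences; each WordPiece (including
# "##"-prefixed continuation pieces) gets its own embedding.
model = Word2Vec(sentences=tokenized, vector_size=100, window=5,
                 min_count=1, workers=4)

print(model.wv.most_similar("token", topn=3))
```

One consequence to keep in mind: downstream consumers of the vectors (the recommendation and clustering use cases mentioned above) would then operate over WordPiece units such as `##ization` rather than whole words.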

My own intuition is that it would decrease the quality of the word2vec model, since we want this model specifically to distinguish between things …

