July 22, 2023, 1:42 p.m. | /u/Qdr-91

Machine Learning www.reddit.com

As far as I know and have studied, each token is mapped from a high-dimensional, discrete token space into a continuous, lower-dimensional space where words are embedded meaningfully based on their relationships in the training data. So a 1000-token text produces 1000 vectors.
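For concreteness, here is a minimal sketch of what I mean, just an embedding-table lookup in numpy (the vocabulary size and dimension are illustrative assumptions, not from any specific model):

```python
# Minimal sketch: one embedding vector per token via a lookup table.
# vocab_size and embed_dim are hypothetical, illustrative figures.
import numpy as np

vocab_size, embed_dim = 50_000, 768
embedding_table = np.random.randn(vocab_size, embed_dim).astype(np.float32)

token_ids = np.random.randint(0, vocab_size, size=1000)  # a 1000-token text
token_vectors = embedding_table[token_ids]                # one row per token

print(token_vectors.shape)  # (1000, 768): 1000 tokens -> 1000 vectors
```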


Now for vector databases (correct me if I'm wrong), people are storing fixed-size vectors for texts of varying lengths. For example, two sentences, one with 1000 tokens and the other with 10 tokens, each produces one …
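A hedged sketch of what I understand happens in that case: one common way to get a single fixed-size vector from a varying-length text is to pool (e.g., average) the per-token vectors. Mean pooling here is my assumption about a typical approach, not something any particular vector database requires:

```python
# Sketch: collapse per-token vectors of any length into one fixed-size vector
# by mean pooling. This is one common strategy, assumed for illustration.
import numpy as np

embed_dim = 768

def mean_pool(token_vectors: np.ndarray) -> np.ndarray:
    """Reduce (n_tokens, embed_dim) to a single (embed_dim,) vector."""
    return token_vectors.mean(axis=0)

long_text = np.random.randn(1000, embed_dim)   # stand-in for 1000 token vectors
short_text = np.random.randn(10, embed_dim)    # stand-in for 10 token vectors

# Both texts end up as one 768-dimensional vector despite different lengths.
print(mean_pool(long_text).shape, mean_pool(short_text).shape)  # (768,) (768,)
```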

