May 15, 2023, 12:43 a.m. | Marco Valentino, Danilo S. Carvalho, André Freitas

cs.LG updates on arXiv.org

Neural word embeddings learned solely from distributional information have
consistently produced useful meaning representations for downstream tasks.
However, existing approaches often result in representations that are hard to
interpret and control. Natural language definitions, on the other hand, possess
a recursive, self-explanatory semantic structure that can support novel
representation learning paradigms able to preserve explicit conceptual
relations and constraints in the vector space.

This paper proposes a neuro-symbolic, multi-relational framework to learn
word embeddings exclusively from natural language definitions by jointly …
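To make the idea concrete, here is a minimal sketch of composing a word vector from the terms of its definition, with each term weighted by its semantic role (e.g. supertype vs. differentia). The vocabulary, definitions, role labels, and weights are all illustrative assumptions for this sketch, not the paper's actual framework, which learns the embeddings jointly rather than composing them from fixed term vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical definition-term vocabulary with random base vectors.
terms = ["feline", "mammal", "domesticated", "animal", "carnivorous"]
term_vec = {t: rng.normal(size=dim) for t in terms}

# Illustrative role weights: the supertype (genus) of a definition
# contributes more strongly than the differentia terms.
ROLE_WEIGHTS = {"supertype": 1.0, "differentia": 0.5}

# Toy definitions as (term, semantic role) pairs -- structure assumed
# for illustration only.
definitions = {
    "cat": [("feline", "supertype"), ("domesticated", "differentia")],
    "lion": [("feline", "supertype"), ("carnivorous", "differentia")],
    "feline": [("mammal", "supertype"), ("carnivorous", "differentia")],
}

def embed(word):
    """Compose a unit-length word vector as a role-weighted average
    of the vectors of the terms in its definition."""
    vecs = [ROLE_WEIGHTS[role] * term_vec[t] for t, role in definitions[word]]
    v = np.mean(vecs, axis=0)
    return v / np.linalg.norm(v)

def cosine(a, b):
    return float(a @ b)

# Words that share a definitional supertype ("feline") inherit a
# shared component in the vector space.
cat, lion = embed("cat"), embed("lion")
print(cosine(cat, lion))
```

The point of the sketch is that the definition's structure, not co-occurrence statistics, determines where a word lands in the space, so conceptual relations such as the genus of a definition remain explicitly traceable.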

