Sept. 23, 2022, 1:15 a.m. | Yuhan Zhang, Wenqi Chen, Ruihan Zhang, Xiajie Zhang

cs.CL updates on arXiv.org

A growing body of research in natural language processing (NLP) and natural
language understanding (NLU) investigates human-like knowledge learned or
encoded in word embeddings from large language models. This is a step
towards understanding what knowledge language models capture that resembles
human understanding of language and communication. Here, we investigated
whether and how the affective meaning of a word (i.e., its valence, arousal,
and dominance) is encoded in word embeddings pre-trained by large neural networks.
We used the human-labeled dataset …
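The probing approach described above can be sketched as fitting a linear map from embeddings to human affect ratings and checking how well held-out ratings are predicted. The sketch below is illustrative only: it uses synthetic embeddings and simulated ratings as stand-ins for the real pre-trained embeddings and human-labeled dataset, and the linear-probe setup is an assumption, not necessarily the paper's exact method.

```python
import numpy as np

# Hypothetical probing setup (a sketch, not the paper's method):
# can a linear map recover affect ratings (valence, arousal,
# dominance) from word embeddings?
rng = np.random.default_rng(0)

n_words, dim = 500, 64
W = rng.normal(size=(n_words, dim))      # stand-in word embeddings

# Simulate ratings that are noisily linear in the embeddings, so the
# probe has something to find; real ratings come from human annotation.
true_map = rng.normal(size=(dim, 3))     # 3 dims: valence, arousal, dominance
Y = W @ true_map + 0.1 * rng.normal(size=(n_words, 3))

# Hold out the last 100 words and fit a least-squares linear probe.
train, test = slice(0, 400), slice(400, None)
B, *_ = np.linalg.lstsq(W[train], Y[train], rcond=None)
pred = W[test] @ B

# Correlation between predicted and "human" ratings per affect dimension:
for i, name in enumerate(["valence", "arousal", "dominance"]):
    r = np.corrcoef(pred[:, i], Y[test][:, i])[0, 1]
    print(f"{name}: r = {r:.2f}")
```

A high held-out correlation would indicate the affect dimension is linearly decodable from the embeddings; with real data, one would compare against word-frequency or random-embedding baselines before drawing conclusions.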
