Feb. 7, 2024, 5:48 a.m. | Thilini Wijesiriwardene, Ruwan Wickramarachchi, Aishwarya Naresh Reganti, Vinija Jain, Aman Chadha, Amit Sheth

cs.CL updates on arXiv.org

The ability of Large Language Models (LLMs) to encode the syntactic and semantic structures of language is well examined in NLP. Analogy identification, in the form of word analogies, has also been studied extensively over the last decade of the language-modeling literature. In this work we specifically examine how LLMs' ability to capture sentence analogies (sentences that convey analogous meaning to each other) varies with their ability to encode the syntactic and semantic structures of sentences. Through our analysis, we find that …
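For context, the word analogies mentioned above are commonly evaluated with the vector-offset method (a : b :: c : ?, answered by the nearest neighbor of b - a + c in embedding space). Below is a minimal, illustrative sketch of that method over toy hand-made vectors; it is not the paper's approach, and real studies use trained model embeddings.

```python
import math

# Toy 3-d embeddings (illustrative only; real analogy studies use trained
# vectors, e.g. word2vec embeddings or LLM hidden states).
emb = {
    "king":  (0.8, 0.9, 0.1),
    "man":   (0.7, 0.2, 0.1),
    "woman": (0.6, 0.2, 0.8),
    "queen": (0.7, 0.9, 0.8),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def solve_analogy(a, b, c):
    """Vector-offset method: a : b :: c : ?, answered by the vocabulary
    word (excluding a, b, c) whose vector is closest to b - a + c."""
    target = tuple(vb - va + vc for va, vb, vc in zip(emb[a], emb[b], emb[c]))
    candidates = {w: v for w, v in emb.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(solve_analogy("man", "king", "woman"))  # prints "queen"
```

Sentence analogies generalize this idea from single word vectors to whole-sentence representations, which is why the abstract ties them to how well LLMs encode sentence-level syntax and semantics.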

