April 26, 2024, 4:47 a.m. | Zineddine Bettouche, Anas Safi, Andreas Fischer

cs.CL updates on arXiv.org

arXiv:2404.16442v1 Announce Type: new
Abstract: Managing the semantic quality of categorization in large textual datasets, such as Wikipedia, presents significant challenges in terms of complexity and cost. In this paper, we propose leveraging transformer models to distill semantic information from texts in the Wikipedia dataset and its associated categories into a latent space. We then explore different approaches based on these encodings to assess and enhance the semantic identity of the categories. Our graphical approach is powered by Convex …
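The abstract describes encoding both article texts and their assigned categories into a shared latent space with a transformer encoder, then assessing the semantic identity of categories from those encodings. A minimal sketch of that idea is below; it is not the authors' implementation. The model name, the toy data, and the centroid/cosine-similarity scoring are illustrative assumptions, since the abstract is truncated before detailing the graphical (convex-hull-based) approach.

```python
# Sketch: embed Wikipedia article texts into a latent space with a transformer
# encoder, then score how well each article agrees with its category centroid.
# Model choice, data, and scoring are assumptions for illustration only.
from sentence_transformers import SentenceTransformer
import numpy as np

# Any sentence-level transformer encoder could be used; this model name is an assumption.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Toy stand-in for (article text, assigned category) pairs from the Wikipedia dataset.
articles = [
    ("The Eiffel Tower is a wrought-iron lattice tower in Paris.", "Landmarks"),
    ("Photosynthesis converts light energy into chemical energy.", "Biology"),
    ("The Louvre is the world's most-visited museum.", "Landmarks"),
]

texts = [text for text, _ in articles]
labels = [label for _, label in articles]

# Distill the texts into the latent space (one embedding vector per article).
embeddings = model.encode(texts, normalize_embeddings=True)

# Represent each category by the centroid of its member articles' embeddings.
centroids = {}
for label in set(labels):
    member_vecs = [e for e, l in zip(embeddings, labels) if l == label]
    centroids[label] = np.mean(member_vecs, axis=0)

# Assess semantic identity: cosine similarity of each article to its category centroid.
for (text, label), emb in zip(articles, embeddings):
    centroid = centroids[label] / np.linalg.norm(centroids[label])
    score = float(np.dot(emb, centroid))
    print(f"{label:10s} {score:.3f}  {text[:50]}")
```

Low similarity scores would flag articles whose placement is semantically questionable, which is the kind of category-quality assessment the abstract points toward.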
