Web: http://arxiv.org/abs/2201.10262

Jan. 26, 2022, 2:10 a.m. | Mael Jullien, Marco Valentino, Andre Freitas

cs.CL updates on arXiv.org

With the methodological support of probing (or diagnostic classification),
recent studies have demonstrated that Transformers encode syntactic and
semantic information to some extent. Following this line of research, this
paper aims at taking semantic probing to an abstraction extreme with the goal
of answering the following research question: can contemporary
Transformer-based models reflect an underlying Foundational Ontology? To this
end, we present a systematic Foundational Ontology (FO) probing methodology to
investigate whether Transformer-based models encode abstract semantic
information. Following different …
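
To give a concrete sense of the probing (diagnostic classification) setup the abstract builds on, here is a minimal sketch, not the authors' methodology: a pretrained encoder is kept frozen, fixed-size representations are extracted, and a simple linear classifier is trained to predict a category label. The model name, the toy sentences, and the placeholder ontological labels (Event, PhysicalObject, Quality) below are illustrative assumptions, not taken from the paper.

```python
# Minimal probing sketch: frozen Transformer embeddings + linear probe.
# If the probe separates the categories well above a random baseline,
# the frozen representations plausibly encode that distinction.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical probing data: (sentence, placeholder ontological category).
examples = [
    ("The meeting lasted two hours.", "Event"),
    ("The hammer is on the table.", "PhysicalObject"),
    ("Her happiness was obvious.", "Quality"),
    ("The concert starts at nine.", "Event"),
    ("The river flows north.", "PhysicalObject"),
    ("His patience wore thin.", "Quality"),
]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()  # the encoder stays frozen; only the probe is trained


def embed(sentence):
    """Mean-pool the last hidden layer into a fixed-size sentence vector."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0).numpy()


X = [embed(s) for s, _ in examples]
y = [label for _, label in examples]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Linear probe over the frozen representations.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```

In practice such probes are run on much larger labelled datasets and compared against control baselines (e.g. random embeddings) so that probe accuracy can be attributed to the encoder rather than to the classifier itself.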
