AST-Probe: Recovering abstract syntax trees from hidden representations of pre-trained language models. (arXiv:2206.11719v1 [cs.CL])
Web: http://arxiv.org/abs/2206.11719
June 24, 2022, 1:10 a.m. | José Antonio Hernández López, Martin Weyssow, Jesús Sánchez Cuadrado, Houari Sahraoui
cs.LG updates on arXiv.org
The objective of pre-trained language models is to learn contextual
representations of textual data. Pre-trained language models have become
mainstream in natural language processing and code modeling. Using probes, a
technique to study the linguistic properties of hidden vector spaces, previous
works have shown that these pre-trained language models encode simple
linguistic properties in their hidden representations. However, none of these
previous works has assessed whether such models encode the whole grammatical
structure of a programming language. In this paper, we …
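
The probing setup the abstract refers to is, in its simplest form, a small classifier trained on frozen hidden vectors to test whether a linguistic property can be read off them. Below is a minimal sketch of such a probe in Python; the synthetic hidden vectors, the 768-dimensional size, and the node-type labels are placeholders standing in for representations that would actually be extracted from a pre-trained code model and from an AST parser, not the authors' AST-Probe method itself.

# Minimal probing sketch: train a linear classifier on frozen hidden
# representations to check whether they encode a syntactic property.
# The vectors and labels below are synthetic placeholders; in a real
# probe they would come from a pre-trained code model and an AST parser.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_tokens, hidden_dim, n_node_types = 2000, 768, 5

# Stand-in for hidden vectors extracted from one layer of the model.
hidden_vectors = rng.normal(size=(n_tokens, hidden_dim))
# Stand-in for per-token AST node-type labels (the probed property).
node_types = rng.integers(0, n_node_types, size=n_tokens)

X_train, X_test, y_train, y_test = train_test_split(
    hidden_vectors, node_types, test_size=0.2, random_state=0
)

# A deliberately simple (linear) probe: if it classifies well, the
# property is linearly recoverable from the frozen representations.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))

On random vectors like these the probe should stay near chance accuracy; the signal of interest in a study like this one is how much better the same probe does on real hidden states, and at which layers.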
Latest AI/ML/Big Data Jobs
Machine Learning Researcher - Saalfeld Lab
@ Howard Hughes Medical Institute - Chevy Chase, MD | Ashburn, Virginia
Project Director, Machine Learning in US Health
@ ideas42.org | Remote, US
Data Science Intern
@ NannyML | Remote
Machine Learning Engineer NLP/Speech
@ Play.ht | Remote
Research Scientist, 3D Reconstruction
@ Yembo | Remote, US
Clinical Assistant or Associate Professor of Management Science and Systems
@ University at Buffalo | Buffalo, NY