Jan. 21, 2022, 2:10 a.m. | Rong Liang, Yujie Lu, Zhen Huang, Tiehua Zhang, Yuze Liu

cs.LG updates on arXiv.org

Using a pre-trained language model (e.g., BERT) to comprehend source code has
attracted increasing attention in the natural language processing community.
However, several challenges arise when applying these language models directly
to programming language (PL) related problems, the most significant being a
lack of domain knowledge, which substantially degrades the model's
performance. To this end, we propose AstBERT, a pre-trained language model
that aims to better understand the PL using the …
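The abstract is truncated, but the core idea it gestures at, namely injecting Abstract Syntax Tree (AST) structure into a BERT-style encoder for code, can be illustrated with a minimal sketch. The snippet below uses only Python's standard ast module; the ast_aware_tokens helper and its interleaved token layout are illustrative assumptions for exposition, not the AstBERT authors' actual preprocessing.

# A minimal sketch (not the paper's implementation) of building an
# AST-aware token sequence that a BERT-style code model could consume.
import ast

def ast_aware_tokens(source: str) -> list[str]:
    """Pair each AST node type with its source snippet, so structural
    information travels alongside the surface code tokens."""
    tree = ast.parse(source)
    tokens = []
    for node in ast.walk(tree):
        # Skip nodes without a source location (e.g. the Module root).
        if not hasattr(node, "lineno"):
            continue
        snippet = ast.get_source_segment(source, node) or ""
        # Emit the structural tag first, then the surface tokens.
        tokens.append(f"<{type(node).__name__}>")
        tokens.extend(snippet.split())
    return tokens

if __name__ == "__main__":
    code = "def add(a, b):\n    return a + b\n"
    print(ast_aware_tokens(code))

Running this on the two-line function prints tags such as <FunctionDef>, <Return>, and <BinOp> interleaved with the code tokens, which is one simple way structural knowledge can be exposed to a sequence model that would otherwise see only flat text.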

