Jan. 21, 2022, 2:10 a.m. | Moiz Rauf, Sebastian Padó, Michael Pradel

cs.LG updates on arXiv.org

Source code summarization is the task of generating a high-level natural
language description for a segment of programming language code. Current neural
models for this task differ in their architecture and in the aspects of code
they consider. In this paper, we show that three state-of-the-art (SOTA) models
for code summarization perform well on largely disjoint subsets of a large
codebase. This complementarity motivates model combination: we propose three
meta-models that select the best candidate summary for a given code segment. The …
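The selection idea described above can be sketched as follows. This is an illustrative toy, not the paper's method: the paper's meta-models are learned, whereas the scorer here (`token_overlap_score`, a hypothetical heuristic measuring word overlap between the summary and the code's identifiers) and all names are assumptions for demonstration only.

```python
# Toy sketch of a selection-style meta-model: given candidate summaries
# produced by several base models for one code segment, pick the candidate
# a scoring function ranks highest. The scorer is a simple heuristic,
# standing in for the learned meta-models of the paper.
import re


def token_overlap_score(code: str, summary: str) -> float:
    """Fraction of summary words that also appear as tokens in the code."""
    code_tokens = set(re.findall(r"[A-Za-z_]+", code.lower()))
    summary_words = summary.lower().split()
    if not summary_words:
        return 0.0
    return sum(w in code_tokens for w in summary_words) / len(summary_words)


def select_summary(code: str, candidates: dict) -> tuple:
    """Return the (model_name, summary) pair with the highest score."""
    return max(candidates.items(),
               key=lambda kv: token_overlap_score(code, kv[1]))


# Example: two base models propose summaries for the same code segment.
code = "def read_file(path): return open(path).read()"
candidates = {
    "model_a": "read a file and return its contents",
    "model_b": "sort a list of numbers",
}
best_model, best_summary = select_summary(code, candidates)
```

In this toy setup the meta-model prefers `model_a`, whose summary shares vocabulary with the code; a learned selector would instead be trained to predict which base model's output is best for a given input.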

