Meta Learning for Code Summarization. (arXiv:2201.08310v1 [cs.LG])
Jan. 21, 2022, 2:10 a.m. | Moiz Rauf, Sebastian Padó, Michael Pradel
cs.LG updates on arXiv.org arxiv.org
Source code summarization is the task of generating a high-level natural
language description for a segment of programming language code. Current neural
models for the task differ in their architecture and the aspects of code they
consider. In this paper, we show that three SOTA models for code summarization
work well on largely disjoint subsets of a large code-base. This
complementarity motivates model combination: We propose three meta-models that
select the best candidate summary for a given code segment. The …
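The combination idea in the abstract — a meta-model that looks at one code segment plus the candidate summaries from several base models and picks the best candidate — can be sketched as follows. This is a minimal illustration, not the paper's method: the scoring heuristic (identifier overlap between code and summary) and all function names are hypothetical stand-ins for a learned selector.

```python
# Hypothetical sketch of candidate-summary selection: given summaries from
# several base models, a meta-model picks one. Here a naive identifier-overlap
# score stands in for the learned meta-model described in the paper.
import re


def identifier_overlap(code: str, summary: str) -> float:
    """Proxy score: fraction of summary words that also occur as
    lower-cased alphabetic tokens in the code segment."""
    code_tokens = {t.lower() for t in re.findall(r"[A-Za-z]+", code)}
    words = [w.lower() for w in re.findall(r"[A-Za-z]+", summary)]
    if not words:
        return 0.0
    return sum(w in code_tokens for w in words) / len(words)


def select_summary(code: str, candidates: list) -> str:
    """Meta-model stub: return the candidate with the highest proxy score."""
    return max(candidates, key=lambda s: identifier_overlap(code, s))


code = "def read_file(path):\n    return open(path).read()"
candidates = [
    "sorts a list in place",            # e.g. output of base model A
    "read a file and return its contents",  # e.g. output of base model B
    "computes the factorial of n",      # e.g. output of base model C
]
print(select_summary(code, candidates))
# → read a file and return its contents
```

A real meta-model would replace `identifier_overlap` with a trained ranker over features of the code and candidates; the selection interface stays the same.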