Few-shot training LLMs for project-specific code-summarization. (arXiv:2207.04237v2 [cs.SE] UPDATED)
Sept. 9, 2022, 1:12 a.m. | Toufique Ahmed, Premkumar Devanbu
cs.LG updates on arXiv.org
Very large language models (LLMs), such as GPT-3 and Codex, have achieved
state-of-the-art performance on several natural-language tasks, and also show
great promise for code. A particularly exciting aspect of LLMs is their knack
for few-shot and zero-shot learning: they can learn to perform a task from very
few examples. Few-shotting has particular synergies in software engineering,
where many phenomena (identifier names, APIs, terminology, coding patterns)
are known to be highly project-specific. However, project-specific data …
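The few-shot setup the abstract describes can be sketched as assembling a handful of project-specific (code, summary) pairs ahead of the target snippet, so the model sees local identifier names and conventions before producing a summary. The helper, the example pairs, and the prompt layout below are illustrative assumptions, not the paper's exact format.

```python
def build_fewshot_prompt(examples, target_code):
    """Assemble a few-shot prompt from (code, summary) example pairs,
    ending with the target code and an open 'Summary:' slot for the model."""
    parts = []
    for code, summary in examples:
        parts.append(f"Code:\n{code}\nSummary: {summary}\n")
    parts.append(f"Code:\n{target_code}\nSummary:")
    return "\n".join(parts)

# Hypothetical pairs drawn from the same project, so project-specific
# names like `cache_ttl` appear in the prompt context.
examples = [
    ("def get_cache_ttl(cfg):\n    return cfg.get('cache_ttl', 300)",
     "Return the cache TTL from config, defaulting to 300 seconds."),
    ("def clear_cache(store):\n    store.entries.clear()",
     "Remove all entries from the cache store."),
]

prompt = build_fewshot_prompt(
    examples,
    "def cache_size(store):\n    return len(store.entries)",
)
print(prompt)
```

In practice the resulting string would be sent as the prompt to a completion model, which continues after the final "Summary:".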