April 9, 2024, 5:56 p.m. | Roland Meertens

InfoQ - AI, ML & Data Engineering www.infoq.com

At QCon London, Loubna Ben Allal discussed Large Language Models (LLMs) for code. She walked through the lifecycle of code completion models, which consists of pre-training on vast codebases, fine-tuning, and continuous adaptation. She focused in particular on open-source models, which are made available through platforms like Hugging Face.
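As an illustration (not taken from the talk itself), the following is a minimal sketch of how an open-source code completion model from the Hugging Face Hub can be used with the transformers library; the checkpoint name bigcode/starcoder2-3b is only an assumed example, and any causal-LM code model would work the same way.

# Minimal sketch: code completion with an open-source model from the Hugging Face Hub.
# The checkpoint below is an assumed example, not one confirmed by the talk summary.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigcode/starcoder2-3b"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Give the model the start of a function and let it complete the body.
prompt = "def fibonacci(n):\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))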


