April 9, 2024, 5:56 p.m. | Roland Meertens

InfoQ - AI, ML & Data Engineering www.infoq.com

At QCon London, Loubna Ben Allal discussed Large Language Models (LLMs) for code. She walked through the lifecycle of code completion models, which consists of pre-training on vast codebases, fine-tuning, and continuous adaptation. She focused in particular on open-source models, which are made widely accessible through platforms like Hugging Face.
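As a concrete illustration of the kind of open-source code completion model discussed, the following is a minimal sketch of generating a completion with the Hugging Face transformers library. The specific checkpoint name is an assumption for illustration, not one named in the talk; any code-trained causal language model on the Hub would work the same way.

```python
# Minimal sketch: run an open-source code completion model from the
# Hugging Face Hub. The checkpoint name below is an assumed example;
# substitute any code model you have access to.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigcode/starcoderbase-1b"  # assumed checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Give the model the start of a function and let it complete the body.
prompt = "def fibonacci(n):\n    "
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In an IDE integration, the same generate call would run on the text surrounding the cursor, which is why pre-training on large codebases and later fine-tuning on in-domain code matter for completion quality.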


