Oct. 16, 2023, 2:32 p.m. | Jesus Rodriguez

Towards AI - Medium (pub.towardsai.net)

Can fine-tuning allow LLMs to unlearn existing knowledge?

Created Using Ideogram
I recently started an AI-focused educational newsletter that already has over 160,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) ML-oriented newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Please give it a try by subscribing below:

TheSequence | Jesus Rodriguez | Substack

Large language models (LLMs) are regularly trained in …

