July 13, 2022, 12:28 p.m. | /u/Boglbert

Natural Language Processing www.reddit.com

Hi,

I am looking for a way to make abstractive summarisation (BART) and/or the fine-tuning of language models (especially GPT-2) more explainable.

My goal is to show a shift towards the training domain by comparing the base model with the fine-tuned model. To that end, I had a look at SHAP for both approaches, but I am interested in any other framework for visualising such learning progress.
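For reference, this is roughly what I have so far with SHAP's transformers integration (a minimal sketch; the fine-tuned checkpoint name is a placeholder, and I'm assuming `shap.Explainer` accepts a summarisation pipeline directly, as in SHAP's text examples):

```python
import shap
from transformers import pipeline

text = "A long article to summarise ..."  # placeholder input

# Explain the base summarisation model: SHAP attributes each output
# token of the summary back to the input tokens.
base = pipeline("summarization", model="facebook/bart-large-cnn")
base_explainer = shap.Explainer(base)
base_values = base_explainer([text])

# Explain the fine-tuned model on the same input, then compare the two
# attribution maps to see which input tokens gained or lost influence.
tuned = pipeline("summarization", model="my-finetuned-bart")  # hypothetical checkpoint
tuned_explainer = shap.Explainer(tuned)
tuned_values = tuned_explainer([text])

# Interactive token-level visualisation of each explanation.
shap.plots.text(base_values)
shap.plots.text(tuned_values)
```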

I am happy for any leads :)

attention bart gpt languagetechnology learning
