March 28, 2024, 8:30 p.m. | /u/autonomous_llm

Machine Learning www.reddit.com

Hi, here you can find our recent publication (along with code) in which we modify LLM internal representations to make the model more truthful. In short, we optimized the ITI method ([2306.03341.pdf (arxiv.org)](https://arxiv.org/pdf/2306.03341.pdf)) and achieved a significant performance improvement. Evaluation was performed mostly on TruthfulQA, though we also tested generalization beyond it (MMLU, ARC, OpenBookQA). We used KL-divergence and cross-entropy (CE) metrics to measure how invasive the intervention is.

https://paperswithcode.com/paper/nl-iti-optimizing-probing-and-intervention
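For anyone unfamiliar with the underlying idea, here is a minimal sketch of ITI-style activation steering at inference time, assuming a Hugging Face GPT-2-style model. The layer index, head choices, steering directions, and intervention strength below are hypothetical placeholders for illustration; in ITI the directions come from linear probes trained on attention-head activations, and this is not our released code (see the repo linked above for that).

```python
# Minimal sketch of inference-time intervention (ITI-style activation steering).
# Assumptions: a GPT-2-style Hugging Face model; layer index, selected heads,
# directions, and alpha are illustrative placeholders, not trained probes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the actual experiments use larger LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

hidden = model.config.n_embd
n_heads = model.config.n_head
head_dim = hidden // n_heads

layer_idx = 6                 # which transformer block to intervene on
selected_heads = [2, 5]       # which attention heads to steer
alpha = 5.0                   # intervention strength
# In ITI these directions come from linear probes; here they are random stand-ins.
directions = {h: torch.randn(head_dim) for h in selected_heads}

def steer_attn_output(module, inputs, output):
    # The attention module's first output has shape (batch, seq, hidden),
    # i.e. the concatenated per-head outputs after the output projection.
    attn_out = output[0] if isinstance(output, tuple) else output
    b, s, _ = attn_out.shape
    per_head = attn_out.reshape(b, s, n_heads, head_dim).clone()
    for h, d in directions.items():
        d = (d / d.norm()).to(per_head.dtype)
        per_head[:, :, h, :] = per_head[:, :, h, :] + alpha * d
    steered = per_head.reshape(b, s, hidden)
    if isinstance(output, tuple):
        return (steered,) + output[1:]
    return steered

handle = model.transformer.h[layer_idx].attn.register_forward_hook(steer_attn_output)

prompt = "Q: What happens if you crack your knuckles a lot?\nA:"
ids = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**ids, max_new_tokens=30, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()  # remove the hook to restore the unmodified model
```

The KL/CE numbers in the paper quantify how far such a hook shifts the model's next-token distributions away from the base model, which is what we mean by "how invasive the intervention is."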
