Feb. 6, 2024, 4:31 p.m. | /u/lewtun

r/MachineLearning (www.reddit.com)

Hello everybody, it’s Lewis here from the research team at Hugging Face 👋.

We've been tinkering with various alignment algorithms for LLMs lately, and were curious to see if Anthropic's [Constitutional AI](https://arxiv.org/abs/2212.08073) works with open models like Mistral 7B. tl;dr it works pretty well and we've summarised our experiments and recipe [here](https://huggingface.co/blog/constitutional_ai)!

Like other works on "self-refinement", Constitutional AI works by asking models to generate responses to a set of prompts and then checking how well those responses align with …
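To make the self-refinement loop concrete, here is a minimal sketch of a single Constitutional AI critique/revision step. It is not the exact recipe from the blog post: the `generate` helper, the example principle, and the revision instruction are placeholders you would replace with your own model call (e.g. Mistral 7B via `transformers` or an inference endpoint) and your own constitution.

```python
# Minimal sketch of one Constitutional AI critique/revision step.
# `generate` is a placeholder: swap in your own LLM call.

def generate(prompt: str) -> str:
    raise NotImplementedError("call your LLM here")

# One example principle; a real constitution contains many.
PRINCIPLE = (
    "Identify ways the response is harmful, unethical, or misleading, "
    "and point them out."
)
REVISION_INSTRUCTION = (
    "Rewrite the response to address the critique while staying helpful."
)

def constitutional_step(user_prompt: str) -> dict:
    # 1. Initial answer to the user prompt.
    initial = generate(user_prompt)

    # 2. Self-critique of that answer against the principle.
    critique = generate(
        f"Prompt: {user_prompt}\nResponse: {initial}\n\n{PRINCIPLE}"
    )

    # 3. Revision conditioned on the critique.
    revision = generate(
        f"Prompt: {user_prompt}\nResponse: {initial}\n"
        f"Critique: {critique}\n\n{REVISION_INSTRUCTION}"
    )

    # The (prompt, revision) pairs can serve as SFT data, and
    # (initial, revision) pairs as preferences for DPO/RLAIF-style training.
    return {
        "prompt": user_prompt,
        "initial": initial,
        "critique": critique,
        "revision": revision,
    }
```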

