April 24, 2024, 8:44 p.m. | /u/Certain_End_5192

machinelearningnews www.reddit.com

I knew going into this experiment that the dataset would be effective, based on prior research I had seen, but I had no idea just how effective it could be. There is little point in aligning a model for safety purposes when hundreds of thousands of rows of alignment training can be undone with just 500 rows.
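The post shares no code, so as a purely illustrative sketch of the general idea — that a few hundred fine-tuning examples can measurably shift a model's learned behavior — here is a toy supervised fine-tuning loop. The model, dataset, and hyperparameters are all hypothetical stand-ins (a tiny GRU language model on synthetic sequences), not the author's actual setup:

```python
# Hypothetical sketch: fine-tuning a tiny causal LM on a small
# "dataset" of 500 rows. Everything here (model size, data,
# hyperparameters) is a toy stand-in for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

VOCAB, DIM, SEQ = 64, 32, 16

class TinyLM(nn.Module):
    """Minimal next-token predictor: embedding -> GRU -> vocab logits."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.head(h)

# 500 toy "rows": sequences following a simple learnable pattern
# (each token is the previous one plus 1, mod VOCAB).
starts = torch.randint(0, VOCAB, (500, 1))
data = (starts + torch.arange(SEQ + 1)) % VOCAB
inputs, targets = data[:, :-1], data[:, 1:]

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def loss_on(x, y):
    logits = model(x)
    return loss_fn(logits.reshape(-1, VOCAB), y.reshape(-1))

before = loss_on(inputs, targets).item()
for _ in range(20):  # a handful of full-batch fine-tuning steps
    opt.zero_grad()
    loss = loss_on(inputs, targets)
    loss.backward()
    opt.step()
after = loss_on(inputs, targets).item()

# Even this small dataset and short run measurably shift the model.
print(after < before)
```

The point of the sketch is only scale: the update signal from a few hundred targeted examples is enough to move the weights in a clearly measurable way, which is the mechanism the post's claim about undoing alignment training rests on.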

I am not releasing or uploading the model in any way. You can see the video of my experimentations with the dataset here: [https://youtu.be/ZQJjCGJuVSA](https://youtu.be/ZQJjCGJuVSA)

