April 24, 2024, 8:44 p.m. | /u/Certain_End_5192

machinelearningnews www.reddit.com

I knew going into this experiment that the dataset would be effective, based on prior research I had seen. I had no idea just how effective it could be, though. There is little point in aligning a model for safety purposes: hundreds of thousands of rows of alignment training can be undone with only 500 rows.

I am not releasing or uploading the model in any way. You can see a video of my experiments with the dataset here: [https://youtu.be/ZQJjCGJuVSA](https://youtu.be/ZQJjCGJuVSA)

alignment data dataset experiment machinelearningnews phi prior research safety training
