Nov. 24, 2023, 12:36 p.m. | /u/seventh_day123

Machine Learning www.reddit.com

OpenRLHF is a high-performance RLHF training framework built on Ray and DeepSpeed. It is the simplest high-performance RLHF library, supporting RLHF training of 34B models with 4 A100 GPUs, or of 7B models across multiple 24GB RTX 4090 GPUs. Its PPO throughput with a 13B llama2 model is four times that of DeepSpeedChat.
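
For context on the PPO numbers: the standard PPO-for-RLHF setup shapes the per-token reward with a KL penalty against the frozen SFT reference policy, adding the reward model's scalar score only on the final token; the KL term keeps the actor from drifting too far from the reference. Below is a minimal PyTorch sketch of that standard shaping; the function name, tensor shapes, and the `kl_coef` value are illustrative assumptions, not OpenRLHF's actual API.

```python
import torch

def shaped_rewards(rm_score, policy_logps, ref_logps, kl_coef=0.02):
    """KL-shaped per-token rewards for PPO-based RLHF (illustrative sketch).

    rm_score:     (batch,)          scalar reward-model score per sequence
    policy_logps: (batch, seq_len)  per-token log-probs under the actor
    ref_logps:    (batch, seq_len)  per-token log-probs under the frozen SFT model
    """
    kl = policy_logps - ref_logps        # per-token KL estimate vs. the reference
    rewards = -kl_coef * kl              # penalize drift from the reference policy
    rewards[:, -1] += rm_score           # RM score is added on the final token only
    return rewards
```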

Currently, OpenRLHF supports:

* PPO-ptx + Multiple Reward Models
* Rejection Sampling
* DPO (see the loss sketch after this list)
* Decision Transformer
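
Of these, DPO is notable for skipping the explicit reward model and the PPO loop entirely: it optimizes the policy directly on preference pairs with a logistic loss over implicit, reference-anchored rewards. Below is a minimal PyTorch sketch of the published DPO loss (Rafailov et al., 2023); the argument names and the `beta` value are illustrative, and this is not OpenRLHF's actual interface.

```python
import torch.nn.functional as F

def dpo_loss(pi_chosen_logps, pi_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss; each input is the summed log-prob of a full completion
    under the trained policy (pi_*) or the frozen reference (ref_*)."""
    # Implicit rewards: beta-scaled log-ratios against the reference model.
    chosen = beta * (pi_chosen_logps - ref_chosen_logps)
    rejected = beta * (pi_rejected_logps - ref_rejected_logps)
    # Logistic loss on the chosen-vs-rejected reward margin.
    return -F.logsigmoid(chosen - rejected).mean()
```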

[https://github.com/OpenLLMAI/OpenRLHF](https://github.com/OpenLLMAI/OpenRLHF)
