Nov. 24, 2023, 12:36 p.m. | /u/seventh_day123

Machine Learning www.reddit.com

OpenRLHF is a high-performance RLHF training framework built on Ray and DeepSpeed. It aims to be the simplest high-performance RLHF library, supporting RLHF training of 34B models on 4 A100 GPUs, or of 7B models across multiple 24GB RTX 4090 GPUs. With a 13B Llama 2 model, OpenRLHF's PPO performance is 4 times that of DeepSpeedChat.

Currently, OpenRLHF supports:

* PPO-ptx + Multiple Reward Models
* Rejection Sampling
* DPO
* Decision Transformer
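
Of the methods above, DPO is the easiest to sketch in a few lines. The following is a minimal, illustrative per-pair DPO loss, not OpenRLHF's actual API: it assumes you already have the summed log-probabilities of the chosen and rejected responses under both the policy and the frozen reference model.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair.

    Inputs are log-probabilities summed over response tokens.
    beta controls how far the policy may drift from the reference.
    """
    # Implicit reward = beta * log-ratio of policy to reference
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(logits)), computed stably as log1p(exp(-logits))
    return math.log1p(math.exp(-logits))
```

When the policy and reference agree, the loss is log 2; it drops below that as the policy learns to prefer the chosen response more strongly than the reference does.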

[https://github.com/OpenLLMAI/OpenRLHF](https://github.com/OpenLLMAI/OpenRLHF)
