March 3, 2024, 12:03 p.m. | /u/Noprocr

Machine Learning www.reddit.com

I'm increasingly frustrated by RL's (continual RL, meta-RL, transformers) sensitivity to hyperparameters and its extensive training times (after 5 years of PhD research, I hate RL). This is particularly problematic in meta-RL and continual RL, where some benchmarks demand up to 100 hours of training. That leaves little room for optimizing hyperparameters or quickly validating new ideas. Given these challenges, and my readiness to explore mathematical theory more deeply, including taking all available online math courses for a proof-based approach to avoid …

