Feb. 2, 2024, 1:31 p.m. | /u/like_a_tensor

Machine Learning | www.reddit.com

In my experience, implementing research is the worst part of research. Not only is compute scarce at universities and ML code hard to debug, but there's also no standard for implementing baselines or other people's experiments. Some papers never release their full codebase or instructions for reproducing results, and even when two papers evaluate on the same dataset, their data-wrangling and model code can be completely different. I end up spending weeks just getting everything to work together. Evaluating on new datasets is …
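To make the "no standard for baselines" complaint concrete, here is a minimal sketch of the kind of shared harness that would help. Every name below (ExperimentDataset, Baseline, run_experiment) is hypothetical, not an existing library API; it's only meant to show one loader per dataset and one fit/predict surface per method, so the evaluation loop never has to be rewritten.

```python
# A minimal sketch of a shared experiment interface. All names here are
# hypothetical: one canonical loader per dataset, one fit/predict surface
# per baseline, and a single evaluation loop shared by everyone.
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Tuple


class ExperimentDataset(ABC):
    """One agreed-upon loader per dataset, shared across papers."""

    @abstractmethod
    def splits(self) -> Tuple[List[Tuple[Any, Any]], List[Tuple[Any, Any]]]:
        """Return (train, test) lists of (input, label) pairs."""


class Baseline(ABC):
    """Every method exposes the same training/prediction surface."""

    @abstractmethod
    def fit(self, train: List[Tuple[Any, Any]]) -> None: ...

    @abstractmethod
    def predict(self, x: Any) -> Any: ...


def run_experiment(dataset: ExperimentDataset, baseline: Baseline) -> Dict[str, float]:
    """The single evaluation loop everyone shares."""
    train, test = dataset.splits()
    baseline.fit(train)
    correct = sum(baseline.predict(x) == y for x, y in test)
    return {"accuracy": correct / len(test)}


# Toy instantiation just to show the pieces fit together.
class ParityDataset(ExperimentDataset):
    def splits(self):
        data = [(n, n % 2) for n in range(100)]
        return data[:80], data[80:]


class MajorityBaseline(Baseline):
    def fit(self, train):
        labels = [y for _, y in train]
        self.label = max(set(labels), key=labels.count)

    def predict(self, x):
        return self.label


if __name__ == "__main__":
    print(run_experiment(ParityDataset(), MajorityBaseline()))  # {'accuracy': 0.5}
```

With an interface like this, reproducing a baseline on a new dataset would mean writing one loader, not untangling another repo's glue code.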


Data Architect @ University of Texas at Austin | Austin, TX
Data ETL Engineer @ University of Texas at Austin | Austin, TX
Lead GNSS Data Scientist @ Lurra Systems | Melbourne
Senior Machine Learning Engineer (MLOps) @ Promaton | Remote, Europe
Software Engineering Manager, Generative AI - Characters @ Meta | Bellevue, WA | Menlo Park, CA | Seattle, WA | New York City | San Francisco, CA
Senior Operations Research Analyst / Predictive Modeler @ LinQuest | Colorado Springs, Colorado, United States