June 12, 2023, 3:23 p.m. | MLOps.community

MLOps.community www.youtube.com

Maria Vechtomova discusses the methodology used to evaluate ML test scores. She explains that they built on existing papers focused on organizational practices but decided to delve deeper into product-level aspects. They developed a set of 70 questions covering topics such as version control, code tracking, model artifacts, and the data used for deployments. The evaluation process involved conducting interviews, scheduling calls, and sharing a SharePoint document for participants to complete. Their approach combined surveys, interviews, and documentation analysis to comprehensively assess ML …
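The talk does not show the scoring code, but a questionnaire like the one described can be aggregated in a few lines. The sketch below is a hypothetical illustration, not the authors' actual tooling: it assumes each answer is tagged with a category (e.g. "version control", "model artifacts") and a points value, and that the overall score is capped by the weakest category, a convention borrowed from earlier ML test-score rubrics.

```python
from collections import defaultdict

# Hypothetical sketch of scoring questionnaire responses per category.
# Assumed points scale: 0 = not done, 1 = done manually, 2 = automated.
def score_responses(answers):
    """Average the points within each category, then take the minimum
    category average as the overall score (a weak area caps the result)."""
    totals = defaultdict(list)
    for category, points in answers:
        totals[category].append(points)
    per_category = {c: sum(v) / len(v) for c, v in totals.items()}
    overall = min(per_category.values()) if per_category else 0.0
    return per_category, overall

# Example with made-up answers to four of the questions:
answers = [
    ("version control", 2), ("version control", 1),
    ("model artifacts", 0), ("data", 2),
]
per_cat, overall = score_responses(answers)
# per_cat["version control"] is 1.5; overall is 0.0, because
# "model artifacts" scored zero and drags the minimum down.
```

Using the minimum rather than the mean makes the overall score reflect the least mature practice area, which is one common way such rubrics surface gaps.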

