March 18, 2024, 12:08 p.m. | /u/audiencevote

r/MachineLearning | www.reddit.com

Hi everyone!

While reviewing for NeurIPS 2024, one of the things I keep noticing is that a lot of papers only evaluate on very small datasets, like CIFAR-10. This feels weird to me: I consider CIFAR-10 a toy dataset and a testbed for my methods, not something I'd use to show that my method actually works or is relevant in practice. So my first intuition is always "this approach probably does not scale to larger datasets". I mean, ImageNet is …

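To make the scale gap concrete, here is a minimal sketch (assuming PyTorch/torchvision, which the post itself does not specify) that loads CIFAR-10 and prints its size and resolution; compare that with ImageNet-1k's roughly 1.28 million training images at far higher resolution.

# Minimal sketch, assuming PyTorch + torchvision are installed (the post does
# not prescribe any framework). CIFAR-10: 50k train / 10k test images at 32x32,
# versus ImageNet-1k's ~1.28M training images, typically used at ~224x224.
import torchvision
from torchvision import transforms

cifar_train = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transforms.ToTensor()
)
print(len(cifar_train))          # 50000 training images
print(cifar_train[0][0].shape)   # torch.Size([3, 32, 32])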
