Nov. 2, 2023, 4:02 p.m. | /u/cmauck10

Machine Learning | www.reddit.com

Would you trust medical AI that’s been trained on pathology/radiology images where tumors/injuries were overlooked by data annotators or otherwise mislabeled? Most image segmentation datasets today contain tons of errors because it is painstaking to annotate every pixel.

[Example of bone shard not labeled properly.](https://preview.redd.it/xd0xhkz5iyxb1.jpg?width=872&format=pjpg&auto=webp&s=8e92c17fde00cec2e618504154c3f91744137bf3)

After substantial research, I'm excited to introduce **support for segmentation** in cleanlab to automatically catch annotation errors in image segmentation datasets, before they harm your models! Quickly use this new addition to detect bad data …
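Here's a minimal sketch of how the new segmentation support could be used, assuming a `cleanlab.segmentation` module with a `find_label_issues` function that takes given per-pixel labels of shape `(N, H, W)` and model-predicted class probabilities of shape `(N, K, H, W)`; the exact function names, signatures, and return values are assumptions and may differ from the released API:

```python
import numpy as np
from cleanlab.segmentation.filter import find_label_issues
from cleanlab.segmentation.rank import get_label_quality_scores

# Assumed inputs (file names are illustrative placeholders):
#   labels     -- annotator-provided per-pixel class ids, shape (N, H, W)
#   pred_probs -- softmax outputs from any trained segmentation model,
#                 shape (N, K, H, W) for K classes
labels = np.load("labels.npy")
pred_probs = np.load("pred_probs.npy")

# Boolean mask of shape (N, H, W): True where a pixel's annotation looks suspect
issues = find_label_issues(labels, pred_probs)

# Per-image and per-pixel quality scores (lower = more likely mislabeled),
# useful for deciding which images to send back for re-annotation first
image_scores, pixel_scores = get_label_quality_scores(labels, pred_probs)
worst_first = np.argsort(image_scores)

print(f"Flagged {issues.sum()} suspect pixels across {len(labels)} images")
print("Images to re-check first:", worst_first[:5])
```

The idea is that the predicted probabilities from any existing segmentation model can be cross-checked against the given annotations, so no extra labeling effort is needed to surface likely errors.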

