Nov. 2, 2023, 4:02 p.m. | /u/cmauck10

Machine Learning www.reddit.com

Would you trust medical AI that’s been trained on pathology/radiology images where tumors/injuries were overlooked by data annotators or otherwise mislabeled? Most image segmentation datasets today contain tons of errors because it is painstaking to annotate every pixel.

[Example of bone shard not labeled properly.](https://preview.redd.it/xd0xhkz5iyxb1.jpg?width=872&format=pjpg&auto=webp&s=8e92c17fde00cec2e618504154c3f91744137bf3)

After substantial research, I'm excited to introduce **support for segmentation** in cleanlab to automatically catch annotation errors in image segmentation datasets, before they harm your models! Quickly use this new addition to detect bad data …
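The post is truncated here, but for readers who want to try it, a minimal sketch of the workflow based on cleanlab's documented segmentation API is below. This is not code from the post: it assumes you already have per-pixel annotations of shape (N, H, W) and out-of-sample predicted probabilities of shape (N, K, H, W) from any trained segmentation model, and the file names are placeholders.

```python
import numpy as np
from cleanlab.segmentation.filter import find_label_issues

# Given (possibly noisy) annotations: shape (N, H, W), one integer class id per pixel.
# Out-of-sample predicted class probabilities from your segmentation model:
# shape (N, K, H, W) for K classes. File names here are placeholders.
labels = np.load("labels.npy")
pred_probs = np.load("pred_probs.npy")

# Boolean mask with the same shape as `labels`: True wherever the given
# annotation disagrees strongly enough with the model to look mislabeled.
issue_mask = find_label_issues(labels, pred_probs)

# Rank images by how many suspect pixels they contain, to prioritize re-review.
pixels_flagged_per_image = issue_mask.reshape(len(labels), -1).sum(axis=1)
worst_images = np.argsort(-pixels_flagged_per_image)[:10]
print("Images with the most suspected annotation errors:", worst_images)
```

Flagged pixels and images can then be routed back to annotators for correction before the model is retrained on the cleaned dataset.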
