May 10, 2024, 4:52 p.m. | /u/Plenty_Mention1787

Machine Learning

I have 780 images. All of them are microscopic, and I'm doing microplastic image detection. First I did binary classification using a U-Net and then VGG-16 transfer learning. Google Colab didn't crash once; it worked really well.

Now I'm doing multi-class segmentation, and the pre-processing is mostly the same, except for one extra channel for the colored masks.

But just by storing the categorical masks of the training dataset, my system RAM usage exceeds 6-7 GB. I have 580 images, each of size 512x512 after resizing. they …
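The memory blow-up described above is typical of storing masks one-hot encoded as floats rather than as integer class indices. A minimal sketch of the arithmetic and a workaround, assuming 5 classes (the class count and dtype are not stated in the post, so these numbers are illustrative):

```python
import numpy as np

# Numbers from the post: 580 masks of 512x512.
# n_classes = 5 is an assumption for illustration.
n_images, h, w, n_classes = 580, 512, 512, 5

# One-hot (categorical) masks as float32: 4 bytes per class per pixel.
one_hot_bytes = n_images * h * w * n_classes * 4
# Class-index masks as uint8: 1 byte per pixel, regardless of class count.
index_bytes = n_images * h * w * 1

print(f"one-hot float32: {one_hot_bytes / 1e9:.2f} GB")  # ~3.04 GB
print(f"uint8 indices:   {index_bytes / 1e9:.2f} GB")    # ~0.15 GB

# Store masks as uint8 indices and convert per batch on the fly:
def to_indices(one_hot):
    # (..., C) one-hot -> (...) uint8 class indices
    return one_hot.argmax(axis=-1).astype(np.uint8)

def to_one_hot(idx, n_classes):
    # (...) uint8 class indices -> (..., C) float32 one-hot
    return np.eye(n_classes, dtype=np.float32)[idx]
```

With masks kept as uint8 indices, a sparse loss (e.g. Keras's `sparse_categorical_crossentropy`) avoids ever materializing the full one-hot tensor in RAM.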

