June 5, 2023, 9:28 p.m. | /u/Fit-Quality7938

Data Science www.reddit.com

TL;DR: I have a text deduplication model running at 98% accuracy over >680 million pairwise combinations, with the threshold set to balance sensitivity and specificity evenly. True duplicates occur at a rate below 0.5%, so even at 98% accuracy most detections are false positives. Is there any way to reduce the false positive rate without losing the signal?
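
To make the base-rate problem concrete, here is the arithmetic behind "most detections are false positives" (a minimal sketch; it assumes the balanced 98% accuracy means sensitivity = specificity = 0.98, and takes the 0.5% prevalence at its upper bound, so both numbers are illustrative):

```python
# Precision of the deduplication flags via Bayes' rule:
# P(true duplicate | flagged) = TP rate / (TP rate + FP rate)

def precision(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Probability that a flagged pair is a true duplicate."""
    true_pos = sensitivity * prevalence                    # duplicates correctly flagged
    false_pos = (1.0 - specificity) * (1.0 - prevalence)   # non-duplicates wrongly flagged
    return true_pos / (true_pos + false_pos)

if __name__ == "__main__":
    # sensitivity = specificity = 0.98, prevalence = 0.5% (illustrative numbers)
    print(f"precision ~= {precision(0.98, 0.98, 0.005):.1%}")  # ~19.8%
```

So even at the most favorable prevalence, roughly four out of five flagged pairs are false positives, which is exactly the effect I'm asking about.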

— Full story —
I work for a large company that handles advertising brand names. I’ve been asked to create a model that detects duplicate names in a database …
