Holy $#!t: Are popular toxicity models simply profanity detectors? [D]
Feb. 4, 2022, 5:48 p.m. | /u/BB4evaTB12
Machine Learning www.reddit.com
One of the problems with real-world machine learning is that engineers often treat models as pure black boxes to be optimized, ignoring the datasets behind them. I've often worked with ML engineers who can't give you a single example of a false positive they want their model to fix!
Perhaps this is okay when your datasets are high-quality and representative of the real world, but they're usually not.
For example, many toxicity and hate speech datasets mistakenly flag texts like "this …
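The failure mode the post describes can be sketched as a toy baseline: a "toxicity" scorer that has effectively learned a profanity list, so it flags benign-but-profane text and misses polite-but-toxic text. This is a minimal illustration, not any real model; the lexicon and example sentences are hypothetical.

```python
# Toy "toxicity" model that is really just a profanity detector,
# illustrating the failure mode described in the post.
# The lexicon and examples below are hypothetical, for illustration only.

PROFANITY = {"damn", "hell", "crap"}  # stand-in profanity lexicon

def naive_toxicity_score(text: str) -> float:
    """Score text by profanity density -- the behavior the post
    argues popular toxicity models approximate."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(w.strip(".,!?") in PROFANITY for w in words)
    return hits / len(words)

# Benign but profane: gets flagged (false positive).
benign = "damn, this paper is good"
# Toxic but polite: slips through (false negative).
toxic = "people like you should not be allowed to vote"

print(naive_toxicity_score(benign))  # nonzero -> flagged
print(naive_toxicity_score(toxic))   # 0.0 -> missed
```

A keyword baseline like this is exactly what a model ends up imitating when its training labels correlate toxicity with profanity, which is why auditing false positives and negatives in the dataset matters more than tuning the black box.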