Evaluation of HTR models without Ground Truth Material. (arXiv:2201.06170v2 [cs.CL] UPDATED)
May 2, 2022, 1:11 a.m. | Phillip Benjamin Ströbel, Simon Clematide, Martin Volk, Raphael Schwitter, Tobias Hodel, David Schoch
cs.CL updates on arXiv.org arxiv.org
The evaluation of Handwritten Text Recognition (HTR) models during their
development is straightforward: because HTR is a supervised problem, the usual
data split into training, validation, and test data sets allows the evaluation
of models in terms of accuracy or error rates. However, the evaluation process
becomes tricky as soon as we switch from development to application. A
compilation of a new (and necessarily smaller) ground truth (GT) from a sample of
the data that we want to apply the …
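The supervised evaluation the abstract describes typically reports a character error rate (CER): the edit distance between the model's transcription and the GT, normalized by the reference length. As a minimal sketch (the strings and helper names below are illustrative, not from the paper):

```python
def levenshtein(ref: str, hyp: str) -> int:
    """Edit distance between a reference and a hypothesis string."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    """Character error rate: edit operations per reference character."""
    return levenshtein(ref, hyp) / max(len(ref), 1)

# Hypothetical HTR output compared against its GT transcription:
gt = "the quick brown fox"
pred = "the quiek brwn fox"       # one substitution, one deletion
print(f"CER = {cer(gt, pred):.3f}")  # 2 edits / 19 chars
```

This is exactly the metric that becomes unavailable in application settings, where no GT transcription exists to serve as the reference string.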