Feb. 23, 2024, 5:48 a.m. | Şaziye Betül Özateş, Tarık Emre Tıraş, Efe Eren Genç, Esma Fatıma Bilgin Taşdemir

cs.CL updates on arXiv.org

arXiv:2402.14743v1 Announce Type: new
Abstract: This study introduces a pretrained large language model-based annotation methodology for the first dependency treebank in Ottoman Turkish. Our experimental results show that iteratively (i) pseudo-annotating data using a multilingual BERT-based parsing model, (ii) manually correcting the pseudo-annotations, and (iii) fine-tuning the parsing model with the corrected annotations speeds up and simplifies the challenging dependency annotation process. The resulting treebank, which will be part of the Universal Dependencies (UD) project, will facilitate automated …
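
The three steps describe a standard model-in-the-loop bootstrapping scheme. As a rough sketch only, not the authors' implementation, the loop might be structured as below; `parser`, `correct_manually`, and `fine_tune` are hypothetical placeholder callables standing in for the BERT-based parsing model, the human correction step, and the training step.

```python
# Hypothetical sketch of the iterative annotation loop from the abstract.
# All names here are placeholders, not the paper's actual code.

def build_treebank(parser, raw_batches, correct_manually, fine_tune):
    """Iteratively build a dependency treebank.

    parser           -- callable: sentence -> predicted dependency tree
    raw_batches      -- iterable of lists of unannotated sentences
    correct_manually -- callable: (sentence, tree) -> corrected tree
    fine_tune        -- callable: (parser, treebank) -> updated parser
    """
    treebank = []
    for batch in raw_batches:
        # (i) pseudo-annotate the batch with the current parsing model
        #     (initially a multilingual BERT-based parser)
        pseudo = [(sent, parser(sent)) for sent in batch]

        # (ii) a human annotator corrects the predicted trees, which is
        #      faster than annotating each sentence from scratch
        corrected = [(sent, correct_manually(sent, tree))
                     for sent, tree in pseudo]
        treebank.extend(corrected)

        # (iii) fine-tune the parser on all corrections so far, so that
        #       later batches need fewer manual fixes
        parser = fine_tune(parser, treebank)
    return treebank
```

The payoff of this design is that annotation cost falls over time: each fine-tuning pass makes the pseudo-annotations for the next batch more accurate, shrinking the manual correction effort.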
