May 19, 2022, 1:11 a.m. | Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting

cs.LG updates on arXiv.org

Pipelined NLP systems have largely been superseded by end-to-end neural
modeling, yet nearly all commonly-used models still require an explicit
tokenization step. While recent tokenization approaches based on data-derived
subword lexicons are less brittle than manually engineered tokenizers, these
techniques are not equally suited to all languages, and the use of any fixed
vocabulary may limit a model's ability to adapt. In this paper, we present
CANINE, a neural encoder that operates directly on character sequences, without
explicit tokenization or …
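
As a rough sketch of what "no explicit tokenization" means in practice, the snippet below maps each character of the input to its Unicode code point and embeds the code points through several small hashed embedding tables instead of a fixed subword vocabulary. The class name, bucket count, and dimensions here are illustrative assumptions for this post, not the exact CANINE configuration.

    import torch
    import torch.nn as nn

    class HashedCharEmbedding(nn.Module):
        # Illustrative sketch: embed raw Unicode code points via several hash
        # buckets so that no fixed subword vocabulary is required.
        def __init__(self, num_hashes: int = 8, num_buckets: int = 16384, dim: int = 768):
            super().__init__()
            assert dim % num_hashes == 0, "dim must split evenly across hash slices"
            self.num_buckets = num_buckets
            # One small table per hash function; their output slices are concatenated.
            self.tables = nn.ModuleList(
                [nn.Embedding(num_buckets, dim // num_hashes) for _ in range(num_hashes)]
            )
            # Distinct odd multipliers act as cheap hash functions over code points.
            self.register_buffer(
                "multipliers", torch.tensor([2 * k + 1 for k in range(1, num_hashes + 1)])
            )

        def forward(self, codepoints: torch.Tensor) -> torch.Tensor:
            # codepoints: (batch, seq_len) tensor of Unicode code points.
            slices = [
                table((codepoints * mult) % self.num_buckets)
                for table, mult in zip(self.tables, self.multipliers)
            ]
            return torch.cat(slices, dim=-1)  # (batch, seq_len, dim)

    def to_codepoints(text: str) -> torch.Tensor:
        # Tokenization-free input: every character becomes its own integer id.
        return torch.tensor([[ord(ch) for ch in text]], dtype=torch.long)

    embed = HashedCharEmbedding()
    ids = to_codepoints("CANINE reads raw characters, e.g. naïve or 你好.")
    print(embed(ids).shape)  # torch.Size([1, seq_len, 768])

Hashing into a bounded number of buckets keeps the embedding parameters small even though Unicode has over a million possible code points, which is one way an encoder can avoid committing to a fixed vocabulary.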

arxiv encoder free language pre-training representation tokenization training
