April 17, 2024, 1:40 p.m. | /u/LelouchZer12

Machine Learning www.reddit.com

Hi

I am aware that a lot of transformer encoder variants exist (BERT, DistilBERT, DeBERTa, RoBERTa ...).

However, I am not interested in the best one (that would probably be DeBERTa V3) but rather in the ones that can quickly reach decent results even with very few examples (\~50–100 sentences, each containing maybe 1, 2 or 3 entities).

* I have done a few experiments in **English**, and to my surprise it seems that the one that performs best …
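For anyone trying the same kind of low-resource NER fine-tuning: with only \~50–100 sentences, getting the label alignment right matters as much as the choice of encoder. A minimal sketch of the usual alignment step (word-level BIO labels expanded to subword tokens, with `-100` so the loss ignores special tokens and continuation subwords) — the `word_ids` list here just mimics what a Hugging Face fast tokenizer's `word_ids()` returns, and the label values are made-up examples:

```python
# Align word-level BIO NER labels to subword tokens, as needed when
# fine-tuning encoders like BERT/DistilBERT for token classification.
# `word_ids` mimics BatchEncoding.word_ids() from a HF fast tokenizer:
# None for special tokens, otherwise the index of the source word.

def align_labels(word_labels, word_ids, ignore_index=-100):
    """Expand word-level labels to subword tokens.

    Special tokens and non-first subwords receive `ignore_index`,
    which PyTorch's cross-entropy loss skips during fine-tuning.
    """
    aligned = []
    previous = None
    for wid in word_ids:
        if wid is None:             # e.g. [CLS], [SEP], padding
            aligned.append(ignore_index)
        elif wid != previous:       # first subword of a new word
            aligned.append(word_labels[wid])
        else:                       # continuation subword
            aligned.append(ignore_index)
        previous = wid
    return aligned

# Hypothetical example: "John lives in NYC" with word labels
# B-PER, O, O, B-LOC encoded as 1, 0, 0, 3; "NYC" splits into
# two subwords, and [CLS]/[SEP] bracket the sequence.
word_ids = [None, 0, 1, 2, 3, 3, None]
print(align_labels([1, 0, 0, 3], word_ids))
# → [-100, 1, 0, 0, 3, -100, -100]
```

Masking continuation subwords this way keeps the per-entity label count consistent with the word-level annotation, which is especially important when the training set is this small.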

