Fine-Tuning for Domain Adaptation in NLP
May 13, 2022, 6:05 p.m. | Marcello Politi
Towards Data Science - Medium towardsdatascience.com
Create your custom model and upload it on Hugging Face
Introduction
When solving an NLP problem, we often reach for pre-trained language models, taking care to choose one that has been fine-tuned on the language we are working with.
For example, if I am working on a project based on the Italian language, I will use models such as dbmdz/bert-base-italian-xxl-cased or dbmdz/bert-base-italian-xxl-uncased.
These language models usually work …
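As a minimal sketch of the step described above, the Italian checkpoints named in the article can be pulled from the Hugging Face Hub with the `transformers` library. The helper name `load_italian_bert` is hypothetical, and the masked-language-model head is an assumption (the natural choice when further fine-tuning on domain text):

```python
MODEL_NAME = "dbmdz/bert-base-italian-xxl-cased"  # checkpoint named in the article


def load_italian_bert(model_name: str = MODEL_NAME):
    """Download a pre-trained Italian BERT and its tokenizer from the Hub.

    Assumes the Hugging Face `transformers` library is installed; the import
    is deferred so the module loads even without it.
    """
    from transformers import AutoModelForMaskedLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name)
    return tokenizer, model
```

The uncased variant (`dbmdz/bert-base-italian-xxl-uncased`) can be swapped in via the `model_name` argument when case information is not useful for the task.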