May 2, 2022, 3:10 p.m. | Charlie O'Neill

Towards Data Science - Medium towardsdatascience.com

Utilising DistilBERT and fine-tuning for text classification


Whilst it’s easy to take for granted that tools like Hugging Face let us apply complex models and transfer learning to almost any problem we like, I thought it would be worth showing that these tools really can achieve state-of-the-art (SOTA) results in an afternoon. Otherwise, what’s the point in trying?

Our task will be to predict whether a tweet is inoffensive or not. To …
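The fine-tuning workflow the article describes can be sketched with the Hugging Face `transformers` Trainer API. This is a minimal, hedged outline: the checkpoint name and hyperparameters below are illustrative assumptions, not the author's exact settings, and the caller is expected to supply tokenised train/eval datasets.

```python
# Hedged sketch: fine-tuning DistilBERT for binary tweet classification
# (offensive vs. inoffensive) with Hugging Face transformers.
# Checkpoint and hyperparameters are illustrative assumptions.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

CHECKPOINT = "distilbert-base-uncased"  # assumed pretrained checkpoint


def build_trainer(train_dataset, eval_dataset):
    """Assemble a Trainer for a two-label head on top of DistilBERT.

    `train_dataset` / `eval_dataset` are assumed to be datasets already
    tokenised with the matching tokenizer (e.g. via `datasets.Dataset.map`).
    """
    model = AutoModelForSequenceClassification.from_pretrained(
        CHECKPOINT, num_labels=2  # inoffensive = 0, offensive = 1 (assumed)
    )
    args = TrainingArguments(
        output_dir="distilbert-tweets",   # assumed output path
        learning_rate=2e-5,               # assumed
        per_device_train_batch_size=16,   # assumed
        num_train_epochs=3,               # assumed
    )
    return Trainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
    )


def tokenise_batch(batch, tokenizer=None):
    """Tokenise a batch of tweets; suitable for `datasets.Dataset.map`."""
    if tokenizer is None:
        tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
    return tokenizer(batch["text"], truncation=True)
```

Calling `build_trainer(...).train()` would then run the fine-tuning loop; the point of the article is that this handful of lines is enough to get competitive results.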

