Simple Hack for Transformers against Heavy Long-Text Classification on a Time- and Memory-Limited GPU Service
March 20, 2024, 4:48 a.m. | Mirza Alim Mutasodirin, Radityo Eko Prasojo, Achmad F. Abka, Hanif Rasyidi
cs.CL updates on arXiv.org
Abstract: Many NLP researchers rely on free computational services, such as Google Colab, to fine-tune their Transformer models. This limits hyperparameter optimization (HPO) for long-text classification, since self-attention scales quadratically with input length and therefore demands more time and memory. For Indonesian, only a few works on long-text classification with Transformers were found; most use only a small amount of data and do not report any HPO. In this study, using 18k news articles, we investigate …
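The truncated abstract does not reveal the paper's actual "hack", but the constraint it describes is concrete: attention cost grows quadratically with sequence length, so long articles must be shortened to fit a time- and memory-limited GPU such as Colab's. The sketch below is only an illustration of that baseline setup, not the authors' method; the model name, label count, and 512-token limit are assumptions for the example.

```python
# Hypothetical sketch: fine-tuning setup for long-text classification under a
# fixed token budget (not the paper's method; model name and num_labels are
# illustrative assumptions).
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "indobenchmark/indobert-base-p1"  # assumed Indonesian BERT checkpoint
MAX_LENGTH = 512   # typical BERT-style limit; attention memory/time ~ O(MAX_LENGTH^2)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=5)

long_article = "..."  # a long news article (placeholder)
inputs = tokenizer(
    long_article,
    truncation=True,        # keep only the first MAX_LENGTH tokens
    max_length=MAX_LENGTH,
    return_tensors="pt",
)
logits = model(**inputs).logits  # class scores for the truncated document
```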