Large Language Models, GPT-2 — Language Models are Unsupervised Multitask Learners
Feb. 10, 2024, 4:42 p.m. | Vyacheslav Efimov
Towards Data Science - Medium towardsdatascience.com
Acing GPT capabilities by turning it into a powerful multitask zero-shot model
Introduction
GPT is a well-known series of models whose latest versions currently dominate many NLP tasks. The first GPT version was a significant milestone: trained with 117M parameters, a large number at the time, the model demonstrated state-of-the-art performance on top benchmarks. Building on this foundation, researchers tried to improve the base version.
In 2019, researchers from OpenAI officially …
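The core idea behind GPT-2's multitask zero-shot ability is that any NLP task can be cast as plain-text completion, so a single language model can attempt translation, summarization, or question answering with no task-specific head or fine-tuning. A minimal sketch of that framing (the `make_prompt` helper and its templates are illustrative assumptions, not code from the paper):

```python
def make_prompt(task: str, text: str) -> str:
    """Frame an NLP task as a text-completion prompt.

    A model like GPT-2 would then simply continue the returned string;
    the continuation is the task output. Templates here are hypothetical
    examples of the pattern, not the exact prompts used by OpenAI.
    """
    templates = {
        "translate_en_fr": "Translate English to French:\n{0} =>",
        "summarize": "{0}\nTL;DR:",
        "qa": "Question: {0}\nAnswer:",
    }
    return templates[task].format(text)


# The same model, three different "tasks", all via text alone:
print(make_prompt("translate_en_fr", "cheese"))
print(make_prompt("summarize", "A long article about transformers..."))
print(make_prompt("qa", "What does GPT stand for?"))
```

Because the task is specified entirely in the input text, no gradient update or architecture change is needed to switch tasks, which is what "unsupervised multitask learner" refers to in the paper's title.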