Dec. 24, 2023, 8 p.m. | Venelin Valkov

Source: www.youtube.com

Do you need 7B+ parameters to get great performance from your language models? Discover how Microsoft Research's Phi-2, a 2.7-billion-parameter language model, challenges this assumption by matching or outperforming models up to 25x its size (according to Microsoft Research). We'll delve into the training methods behind Phi-2, from "textbook-quality" training data to scaled knowledge transfer techniques. Then we'll load the model in a Google Colab notebook and try it out on coding, math, reasoning, and data extraction tasks.

Blog Post: https://www.microsoft.com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/
Phi-2 on HF …
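For reference, a minimal sketch of what loading Phi-2 in Colab might look like, assuming the Hugging Face transformers and accelerate libraries and a GPU runtime. The model id "microsoft/phi-2" is the official Hugging Face checkpoint; the prompt is just an illustrative example, not the one used in the video.

```python
# Minimal sketch: load Phi-2 and generate a completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a free Colab GPU
    device_map="auto",          # let accelerate place the model on the GPU
)

# Hypothetical example prompt for the coding task.
prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```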
