July 16, 2023, noon | code_your_own_AI


New research by Microsoft: LongNet brings a 1-billion-token context length to your LLM.
A new super intelligence, as Captain Picard might call it? Or just another mathematical approximation, and therefore a simplification, of self-attention in Transformers, achieved by applying a simple dilated attention? A minimal sketch of the idea follows the paper link below.

LONGNET: Scaling Transformers to 1,000,000,000 Tokens
https://arxiv.org/pdf/2307.02486.pdf
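
For intuition, here is a minimal sketch of the dilated attention idea in PyTorch: split the sequence into segments, attend only over every r-th position within each segment, and scatter the results back. The segment length w, the dilation rate r, and the zero-filling of skipped positions are illustrative assumptions for this toy version; the paper's full scheme mixes several (w, r) pairs across attention heads.

```python
# Toy sketch of dilated attention (the sparse approximation LongNet uses
# in place of dense self-attention). w and r are hypothetical example
# values, not the paper's configuration.
import torch

def dilated_attention(q, k, v, w=8, r=2):
    """q, k, v: (batch, seq_len, dim); seq_len must be divisible by w."""
    b, n, d = q.shape
    # Split the sequence into segments of length w.
    q = q.view(b, n // w, w, d)
    k = k.view(b, n // w, w, d)
    v = v.view(b, n // w, w, d)
    # Keep every r-th position inside each segment (the dilation step).
    idx = torch.arange(0, w, r)
    qs, ks, vs = q[:, :, idx], k[:, :, idx], v[:, :, idx]
    # Dense attention, but only over the sparsified segments, so the
    # cost scales with the segment size rather than the full sequence.
    scores = torch.einsum("bsqd,bskd->bsqk", qs, ks) / d ** 0.5
    out_sparse = torch.einsum("bsqk,bskd->bsqd", scores.softmax(-1), vs)
    # Scatter results back to their original positions; skipped
    # positions stay zero in this simplified version.
    out = torch.zeros_like(q)
    out[:, :, idx] = out_sparse
    return out.view(b, n, d)

# Example: one sequence of 32 tokens with 16-dim embeddings.
x = torch.randn(1, 32, 16)
print(dilated_attention(x, x, x).shape)  # torch.Size([1, 32, 16])
```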

#ai
#largelanguagemodels
#promptengineering

