March 18, 2024, 9:16 p.m. | Erika Morphy

TechSpot www.techspot.com


Apple researchers have developed MM1, a new approach to training large language models (LLMs) that incorporates both textual and visual information. MM1 is a family of multimodal models with up to 30 billion parameters, trained on a dataset comprising image-caption pairs, interleaved image-text documents, and text-only data.
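The three-way data mixture described above can be sketched with a toy sampler. Everything here is illustrative: the stand-in records, the `MIX_WEIGHTS` ratios, and the function names are assumptions for demonstration, not Apple's reported pretraining recipe.

```python
import random

# Toy stand-ins for the three pretraining data types mentioned in the
# article: image-caption pairs, interleaved image-text documents, and
# text-only documents. Contents are placeholders, not real data.
DATA_SOURCES = {
    "image_caption": [
        {"image": f"img_{i}.jpg", "caption": f"caption {i}"} for i in range(100)
    ],
    "interleaved": [
        {"segments": [f"text {i}", f"img_{i}.jpg", f"text {i} cont."]}
        for i in range(100)
    ],
    "text_only": [{"text": f"document {i}"} for i in range(100)],
}

# Assumed mixing ratios for illustration only; the paper's actual
# ratios may differ.
MIX_WEIGHTS = {"image_caption": 0.45, "interleaved": 0.45, "text_only": 0.10}

def sample_batch(batch_size: int, seed: int = 0):
    """Draw a training batch whose source composition follows MIX_WEIGHTS."""
    rng = random.Random(seed)
    sources = list(MIX_WEIGHTS)
    weights = [MIX_WEIGHTS[s] for s in sources]
    batch = []
    for _ in range(batch_size):
        src = rng.choices(sources, weights=weights, k=1)[0]
        batch.append((src, rng.choice(DATA_SOURCES[src])))
    return batch

batch = sample_batch(8)
```

In a real pipeline, each sampled record would then be tokenized (with image patches encoded by a vision encoder) before being fed to the model; this sketch only shows the source-mixing step.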


