March 15, 2024, 8:31 p.m. | Michael Nuñez

AI News | VentureBeat (venturebeat.com)

Apple researchers have achieved state-of-the-art results in multimodal AI with their MM1 family of models, which combine text and images to deliver breakthroughs in image captioning, visual question answering, and few-shot learning. The work comes as the company invests heavily in AI to enhance Siri, Messages, and future products. A sketch of what multimodal few-shot prompting looks like in practice appears below.
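The sketch below illustrates the basic shape of a multimodal few-shot prompt: demonstration pairs of images and captions are interleaved, followed by a final query image whose caption the model is asked to complete. The ImageInput type, the build_few_shot_prompt helper, and the file names are illustrative placeholders assumed for this example; they are not Apple's MM1 API.

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

# All names here are illustrative placeholders, not Apple's MM1 API.

@dataclass
class ImageInput:
    """Stand-in for an image fed to a multimodal model."""
    path: str

PromptItem = Union[str, ImageInput]

def build_few_shot_prompt(examples: List[Tuple[str, str]],
                          query_image: str) -> List[PromptItem]:
    """Interleave (image, caption) demonstrations with a final query image,
    the basic shape of a multimodal few-shot captioning prompt."""
    prompt: List[PromptItem] = []
    for image_path, caption in examples:
        prompt.append(ImageInput(image_path))
        prompt.append(f"Caption: {caption}")
    prompt.append(ImageInput(query_image))
    prompt.append("Caption:")  # the model is asked to complete this line
    return prompt

if __name__ == "__main__":
    demos = [
        ("dog.jpg", "A dog catching a frisbee in a park."),
        ("kitchen.jpg", "A modern kitchen with marble countertops."),
    ]
    for item in build_few_shot_prompt(demos, "query.jpg"):
        print(item)
```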

