March 15, 2024, 8:31 p.m. | Michael Nuñez

AI News | VentureBeat venturebeat.com

Apple researchers have achieved state-of-the-art results in multimodal AI with their MM1 models, which combine text and images to deliver gains in image captioning, visual question answering, and few-shot learning. The work comes as the company invests heavily in AI to enhance Siri, Messages, and future products.

