March 29, 2024, 4 p.m. | Sergio De Simone

InfoQ - AI, ML & Data Engineering www.infoq.com

Many large language models (LLMs), both closed and open source, have become available recently, in turn enabling the creation of combined models known as multimodal LLMs (MLLMs). Yet few, if any, of them disclose the design choices that went into their creation, say Apple researchers, who have distilled principles and lessons for designing state-of-the-art (SOTA) multimodal LLMs.


