March 19, 2024, 2:23 p.m.

News on Artificial Intelligence and Machine Learning techxplore.com

A team of computer scientists and engineers at Apple has developed an LLM that the company claims can interpret both image and text data. The group has posted a paper to the arXiv preprint server describing its new MM1 family of multimodal models and test results.
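The core idea behind multimodal models like the MM1 family is that image patches and text tokens are both projected into one shared embedding space, so a single transformer can attend over the combined sequence. The toy sketch below is not Apple's MM1 code; the patch size, embedding width, and random projections are purely illustrative stand-ins for learned components.

```python
# Illustrative sketch of multimodal input fusion (NOT Apple's MM1 implementation).
# Images and text are mapped into one shared embedding sequence; all weights
# here are random stand-ins for what a trained model would learn.
import numpy as np

rng = np.random.default_rng(0)
d_model = 8  # shared embedding width (illustrative)

def encode_image(image, patch=2):
    """Split an image into patches and project each patch to a d_model vector."""
    h, w = image.shape
    patches = [image[i:i + patch, j:j + patch].ravel()
               for i in range(0, h, patch) for j in range(0, w, patch)]
    proj = rng.standard_normal((patch * patch, d_model))  # stand-in for a learned projection
    return np.stack(patches) @ proj                       # shape: (n_patches, d_model)

def encode_text(token_ids, vocab=100):
    """Look up each token id in a (stand-in) embedding table."""
    table = rng.standard_normal((vocab, d_model))
    return table[token_ids]                               # shape: (n_tokens, d_model)

image = rng.standard_normal((4, 4))   # fake 4x4 grayscale image -> 4 patches
tokens = np.array([5, 17, 42])        # fake token ids for a text prompt

# Concatenate both modalities into one sequence for the language model to attend over.
sequence = np.concatenate([encode_image(image), encode_text(tokens)], axis=0)
print(sequence.shape)  # (4 image patches + 3 text tokens, d_model) -> (7, 8)
```

In a real model the projection and embedding table are trained jointly, but the interface is the same: once both modalities share one sequence, the transformer needs no modality-specific machinery downstream.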

